[docs] fix readme typos; use the same scala style in the examples
This closes apache#1743
vasia committed Mar 1, 2016
1 parent a922473 commit e8e88af
1 changed file with 7 additions and 7 deletions: README.md
@@ -19,17 +19,17 @@ Learn more about Flink at [http://flink.apache.org/](http://flink.apache.org/)
 
 * Fault-tolerance with *exactly-once* processing guarantees
 
-* Natural back-pressure in streaming programs.
+* Natural back-pressure in streaming programs
 
 * Libraries for Graph processing (batch), Machine Learning (batch), and Complex Event Processing (streaming)
 
-* Built-in support for iterative programs (BSP) and in the DataSet (batch) API.
+* Built-in support for iterative programs (BSP) in the DataSet (batch) API
 
-* Custom memory management to for efficient and robust switching between in-memory and out-of-core data processing algorithms.
+* Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
 
-* Compatibility layers for Apache Hadoop MapReduce and Apache Storm.
+* Compatibility layers for Apache Hadoop MapReduce and Apache Storm
 
-* Integration with YARN, HDFS, HBase, and other components of the Apache Hadoop ecosystem.
+* Integration with YARN, HDFS, HBase, and other components of the Apache Hadoop ecosystem


### Streaming Example
@@ -53,8 +53,8 @@ case class WordWithCount(word: String, count: Long)
 
 val text = env.readTextFile(path)
 
-val counts = text.flatMap { _.split("\\W+") }
-  .map { WordWithCount(_, 1) }
+val counts = text.flatMap { w => w.split("\\s") }
+  .map { w => WordWithCount(w, 1) }
   .groupBy("word")
   .sum("count")
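Besides the style change from placeholder syntax (`_`) to a named parameter (`w`), the hunk above also swaps the split pattern from `\W+` to `\s`. The two regexes tokenize differently: `\W+` splits on runs of non-word characters (so punctuation is stripped), while `\s` splits on single whitespace characters (so punctuation stays attached to tokens). A small plain-Scala sketch, independent of Flink, to illustrate:

```scala
// Demonstrates the two split patterns that appear in the diff above,
// using plain java.lang.String#split (no Flink required).
object SplitDemo extends App {
  val line = "Hello, world! Hello"

  // "\\W+" splits on runs of non-word characters; punctuation is dropped:
  val words = line.split("\\W+")
  println(words.mkString("|"))   // Hello|world|Hello

  // "\\s" splits on single whitespace characters; punctuation is kept:
  val tokens = line.split("\\s")
  println(tokens.mkString("|"))  // Hello,|world!|Hello
}
```

With `\s`, a word-count over this input would count `Hello,` and `Hello` as distinct words, which matches the simpler, whitespace-based tokenization the new README example uses.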
