I just wrapped up this webinar. Thanks to all for attending!
Here are the slides and the recording of the webinar:
Spark processes events in ‘micro-batches’. For example, I can define the batch interval to be 5 seconds; Spark will then process whatever number of events was captured in that batch (it could be none, one, ten, or a thousand!). Currently the lowest batch interval is about half a second (500 ms).
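To make the micro-batch idea concrete, here is a minimal sketch in plain Python (not Spark itself): events carry timestamps, and the engine groups them into fixed 5-second windows, so each batch can hold any number of events. The event list and payloads are made up for illustration.

```python
from itertools import groupby

# Conceptual sketch of micro-batching (plain Python, not Spark):
# events are grouped into fixed batch intervals -- 5 seconds here,
# matching the example above.
BATCH_INTERVAL = 5  # seconds

# Hypothetical events: (timestamp_in_seconds, payload)
events = [(0.4, "a"), (1.2, "b"), (6.0, "c"), (12.5, "d"), (13.1, "e")]

def micro_batches(events, interval):
    """Group events by which batch window their timestamp falls into."""
    key = lambda e: int(e[0] // interval)
    return {k: [payload for _, payload in grp]
            for k, grp in groupby(sorted(events, key=key), key=key)}

batches = micro_batches(events, BATCH_INTERVAL)
# Batch 0 covers [0, 5) and holds two events; batch 1 covers [5, 10)
# and holds one -- a batch may capture any number of events, even zero.
print(batches)
```

In real Spark Streaming the interval is set once on the `StreamingContext` (e.g. `StreamingContext(sc, 5)`), and Spark does this grouping for you as data arrives.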
This presentation will give you more details.
It is usually good practice to pair Spark Streaming with HDFS: Spark uses HDFS for checkpointing (saving the streaming state periodically so a failed job can recover).
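In Spark itself this is a one-liner, `ssc.checkpoint("hdfs://...")` on the `StreamingContext`. Conceptually, checkpointing just persists the running state every so often so it can be restored after a crash. Here is a minimal sketch of that idea in plain Python; the state shape, file path, and checkpoint frequency are assumptions for illustration, not Spark's actual format.

```python
import json
import os
import tempfile

# Conceptual sketch of periodic checkpointing (not Spark's format):
# persist running state every few batches so processing can resume
# after a failure instead of starting over.
CHECKPOINT_EVERY = 2  # checkpoint after every 2 batches (assumed value)

def save_checkpoint(path, state):
    """Write state atomically: write a temp file, then rename it."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # rename is atomic on POSIX

def load_checkpoint(path):
    """Restore the last saved state, or start fresh if none exists."""
    if not os.path.exists(path):
        return {"batches_done": 0, "event_count": 0}
    with open(path) as f:
        return json.load(f)

ckpt = os.path.join(tempfile.mkdtemp(), "state.json")
state = load_checkpoint(ckpt)
for batch_no, batch in enumerate([["a", "b"], ["c"], ["d", "e"]], start=1):
    state["batches_done"] += 1
    state["event_count"] += len(batch)
    if batch_no % CHECKPOINT_EVERY == 0:
        save_checkpoint(ckpt, state)

# After a restart, load_checkpoint(ckpt) returns the last saved state,
# so only the batches after that checkpoint need to be reprocessed.
```

HDFS plays the role of the durable, replicated store here: a checkpoint on a single machine's local disk would be lost along with the machine.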