Tuesday May 2
11:40 – 12:30
Vevey 1-2
Processing Data of Any Size with Apache Beam
Rewriting code as you scale is a terrible waste of time. You have perfectly working code, but it doesn't scale. You really need code that works at any size, whether that's a megabyte or a terabyte. Beam allows you to learn a single API and process data as it grows. You don't have to rewrite at every step.
In this session, we will talk about Beam and its API. We'll see how Beam executes on big data or small data, and we'll touch on some of the advanced features that make Beam an interesting choice. A minimal sketch of what that single API looks like follows below.
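The sketch below is not from the session itself; it is a minimal word-count pipeline using the Beam Java SDK, intended only to illustrate the "write once, run at any size" idea. The class name, the input path (input.txt), and the output prefix (word-counts) are placeholders, and the runner is whatever is supplied on the command line.

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

import java.util.Arrays;

public class WordCountAnySize {
  public static void main(String[] args) {
    // The runner is chosen from the command line (e.g. the local DirectRunner
    // for a megabyte file, a cluster runner for terabytes). The pipeline code
    // itself does not change as the data grows.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply("ReadLines", TextIO.read().from("input.txt"))            // placeholder input path
     .apply("SplitWords", FlatMapElements
         .into(TypeDescriptors.strings())
         .via(line -> Arrays.asList(line.toLowerCase().split("\\W+"))))
     .apply("CountWords", Count.perElement())
     .apply("Format", MapElements
         .into(TypeDescriptors.strings())
         .via(kv -> kv.getKey() + ": " + kv.getValue()))
     .apply("WriteCounts", TextIO.write().to("word-counts"));        // placeholder output prefix

    p.run().waitUntilFinish();
  }
}

The same jar can be launched against the DirectRunner for local testing or handed to a distributed runner with different command-line options; that portability is the point the abstract makes about not rewriting code at every step.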
Other sessions on Tuesday May 2:
- Processing Data of Any Size with Apache Beam – Jesse Anderson – 11:40
- Apache Spark Beyond Shuffling - Why it isn't Magic - but also where there is some really cool Magic – Holden Karau – 15:40
- Apache Flink - The State of the Art in Streaming Computation – Jamie Grier – 13:30
- Fast Data Architectures for Streaming Applications – Dean Wampler – 10:35
- Cloud Native Data Pipelines – Sid Anand – 14:35
- Stream All Things - Patterns of Modern Data Integration – Gwen Shapira – 16:45