Last week, I attended Spark Summit EU 2017. I have joined Hadoop Summit several times, but this was my first time at Spark Summit.
Spark Summit is a tech conference focused on the ecosystem, use cases, and technology deep dives around Apache Spark. Apache Spark has one of the most active open source communities in Big Data, so the conference offered me a lot of lively and interesting topics.
The first impression I had at the conference was deep learning. Spark users are now very interested in machine learning and deep learning use cases; in fact, most of the development of the ecosystem and tools seems to be happening in this field. Since I’m also interested in machine learning and deep learning, that was a good thing for me. Some of the talks that interested me were:
- Deep Learning Pipelines: a library that provides deep learning algorithms compatible with the Spark ML pipeline API.
- Apache Spark Memory Model: explained the details of the memory management architecture used in Apache Spark.
- CERN introduced their next accelerator and the logging service for their own scientific research. One accelerator experiment generates over 1PB of data! This kind of topic feels peculiar to an EU conference.
- A comparison of the three APIs (RDD, DataFrame, Dataset) helped me understand which use cases each one is suited for.
Spark Summit was a very exciting conference for me because it covered a lot of what I wanted to hear. I don’t know of many industry technology conferences focusing on deep learning (except for the TensorFlow Dev Summit?). Anyway, I want to go to the next Spark Summit if I get the chance.
Thanks