Apache Spark as an in-memory-only data processing engine?
Contrary to the common perception of Apache Spark as an in-memory engine, it keeps blocks of data both in memory and, perhaps surprisingly, on disk. Disk is usually used for spilling data when memory runs low. That raises the question: is it possible to configure Spark so it never touches disk and is therefore memory-fast? That's the question Jacek is going to answer during the talk. You'll learn a bit about the internals of Apache Spark, which parts are or could be memory-only, and what challenges that poses.
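As a taste of the topic, here is a minimal Scala sketch of the knob most relevant to the talk's question: the storage level used when caching a dataset. `StorageLevel.MEMORY_ONLY` asks Spark to keep partitions in memory and recompute (rather than spill) anything that does not fit, while the default `MEMORY_AND_DISK` falls back to disk. Note this only governs caching; shuffle spills can still hit disk, which is part of the challenge the talk explores. (The session setup below is illustrative; requires a Spark installation to run.)

```
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder()
  .appName("memory-only-demo")
  .master("local[*]")   // illustrative local setup
  .getOrCreate()

val ds = spark.range(0, 1000000)

// Keep partitions in memory only; partitions that don't fit
// are recomputed on access instead of being spilled to disk.
ds.persist(StorageLevel.MEMORY_ONLY)

ds.count()  // materializes the cache
```

Compare with `ds.persist(StorageLevel.MEMORY_AND_DISK)` (the default for `Dataset.cache()`), which writes overflow partitions to local disk instead of recomputing them.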