Schedule

  • Times in the program are shown in your time zone.

  • The program hasn’t been finalized yet, so there may still be some changes.

Download schedule
  1. October 5

    Break

    Talk

    Apache Spark SQL: Extend and Manage

    How to configure and extend Apache Spark for your tasks without rewriting the framework. I will cover approaches to extending the functionality of Spark SQL without touching the platform's source code. You will learn about creating your own data sources, developing user-defined functions for specialized processing, and implementing optimization rules that adapt to different queries.
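    To make the user-defined-function extension point concrete, here is a minimal, hypothetical PySpark sketch. The masking function and all names are illustrative and not from the talk; the registration step requires pyspark to be installed.

    ```python
    # A pure-Python transformation that we will expose to Spark SQL.
    def mask_email(email: str) -> str:
        """Hide the local part of an e-mail address: alice@x.com -> a***@x.com."""
        local, _, domain = email.partition("@")
        return local[:1] + "***@" + domain

    def register_mask_email(spark):
        """Register mask_email as a Spark SQL UDF (requires pyspark).

        After registration it can be used directly in SQL:
            SELECT mask_email(email) FROM users
        """
        from pyspark.sql.functions import udf
        from pyspark.sql.types import StringType
        spark.udf.register("mask_email", udf(mask_email, StringType()))
    ```

    Keeping the transformation as a plain Python function makes it unit-testable without a running Spark session.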

    Break

    Talk

    What a Metastore Is

    What a metastore is, how it works in the big data ecosystem, what solutions exist on the market, and why we decided to develop our own. I will share practical experience, the architecture, and the lessons we have learned.

    Break

    Talk

    Spark Connect: A New Approach to Working with Apache Spark

    I will talk about Spark Connect, a new approach to working with Apache Spark that lets you build the client side of an application in any language, with no dependency on the JVM. We will discuss the architecture of Spark Connect and how it differs from classic Spark. You will also learn about a project where we used the Spark Connect API from C++.
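    As a hedged illustration of the client-side model the talk describes, a Spark Connect client in Python can be a thin sketch like this (the host, port, and helper name are assumptions; it needs pyspark 3.4+ and a reachable Spark Connect server):

    ```python
    DEFAULT_URL = "sc://localhost:15002"  # 15002 is Spark Connect's default port

    def connect(url: str = DEFAULT_URL):
        """Create a client-side SparkSession that talks to a remote Spark
        Connect server over gRPC; no JVM runs inside this Python process."""
        from pyspark.sql import SparkSession
        return SparkSession.builder.remote(url).getOrCreate()

    # Usage (not executed here; needs a running server):
    # spark = connect("sc://spark.example.com:15002")
    # spark.range(10).filter("id % 2 = 0").show()
    ```

    The same decoupled, gRPC-based protocol is what makes non-JVM clients such as the C++ one mentioned in the talk possible.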

    Break

    Talk

    Spark is Done!

    Let's talk about Spark. What did it give data engineers? Why do many of us use it?

    Spark has been around for over 15 years. What problems do we face when using it? Is there anything better, and can Spark already be replaced?

    Why does %SQLEngineName% slow down, and how can you fix it? Benchmarks, open source, and more.

    Break

    Talk

    Choosing a Database for Storing a Lot of Money

    The database is already fronted by read replicas, but that is still not enough. What should one do?

    I will tell you how we chose a fault-tolerant, scalable database for storing financial data, which options we ruled out and by what criteria, why we settled on YugabyteDB, and what our experience with it has been.

    Networking and Afterparty

  2. October 6

    Talk

    StarRocks: the Reality of the Modern Data Platform

    The data platform in our company has existed for more than 5 years, and over this time it has absorbed many trendy (and not so trendy) solutions. I will tell you how we tried to choose our future among ClickHouse, Greenplum, and Trino, and found StarRocks.

    Break

    Talk

    Third-Party Runtime Engines for Apache Spark: Hands-On Experience

    Our experience with the Comet and Gluten (Velox) execution engines: from adoption and build specifics to the results of testing on real ETL jobs. I will cover pitfalls and non-obvious details, show the results, and look at the cases where these engines help and where they don't work at all.
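    For orientation, enabling such an engine is largely a matter of Spark configuration. Below is a hypothetical sketch for Comet; the property keys follow the Apache DataFusion Comet documentation but may differ by version, and the jar path and memory size are placeholders.

    ```python
    # Illustrative configuration for running Spark with the Comet plugin.
    COMET_CONF = {
        "spark.jars": "/opt/jars/comet-spark.jar",        # placeholder path
        "spark.plugins": "org.apache.spark.CometPlugin",  # loads the native engine
        "spark.comet.enabled": "true",
        "spark.comet.exec.enabled": "true",
        # Native engines like Comet operate on off-heap memory:
        "spark.memory.offHeap.enabled": "true",
        "spark.memory.offHeap.size": "4g",
    }

    def build_session(conf: dict):
        """Apply the settings to a SparkSession builder (requires pyspark)."""
        from pyspark.sql import SparkSession
        builder = SparkSession.builder
        for key, value in conf.items():
            builder = builder.config(key, value)
        return builder.getOrCreate()
    ```

    Gluten is wired in the same way, through its own plugin class and properties; operators the native engine cannot handle fall back to vanilla Spark, which is one source of the "when they don't work at all" cases.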

    Break

    Talk

    Vector Search Algorithms in YDB

    YDB has come a long way, from applying basic vector search techniques to building a scalable and efficient vector index. The talk offers a detailed look at the stages in the evolution of vector search in YDB, including the challenges involved and the engineering solutions behind them.

    Break

    Talk

    How We Improved Data Management Processes in Airflow: Practical Cases

    I'll tell you how we use Airflow in practice: from the pain of sensors to the convenience of datasets, from static DAGs spread across a pile of files to dynamic ones, and from standard features to our own custom solutions that won't leave anyone who runs Airflow in production indifferent.
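    The sensors-to-datasets move the abstract mentions can be sketched as follows: a producer DAG declares a Dataset as an outlet, and a consumer DAG is scheduled on that Dataset instead of polling with a sensor. This is a hypothetical example (URIs and DAG names are illustrative) using the data-aware scheduling available in Airflow 2.4+.

    ```python
    import datetime

    ORDERS_URI = "s3://lake/raw/orders"  # illustrative dataset URI

    def build_dags():
        """Producer/consumer DAG pair (requires apache-airflow>=2.4)."""
        from airflow import DAG, Dataset
        from airflow.operators.python import PythonOperator

        orders = Dataset(ORDERS_URI)

        # Producer: a normal time-scheduled DAG that declares the dataset
        # as an outlet, marking it as updated when the task succeeds.
        with DAG("produce_orders", start_date=datetime.datetime(2024, 1, 1),
                 schedule="@daily") as producer:
            PythonOperator(task_id="load", python_callable=lambda: None,
                           outlets=[orders])

        # Consumer: scheduled on the dataset instead of a cron expression,
        # so it runs whenever the producer updates `orders` -- no sensor needed.
        with DAG("consume_orders", start_date=datetime.datetime(2024, 1, 1),
                 schedule=[orders]) as consumer:
            PythonOperator(task_id="transform", python_callable=lambda: None)

        return producer, consumer
    ```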

    Talk

    How We Built a Data Lakehouse Platform on Apache Ozone

    In this talk, I will describe how we migrated from a platform based on Vertica and HDFS to the new Dota 2 architecture (the second version of our internal analytics platform), built on Apache Ozone (S3), Trino, Spark, and Iceberg. I will share our experience choosing the storage layer and explain why we abandoned HDFS and chose Apache Ozone as an on-prem implementation of S3.
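    To illustrate how such pieces fit together, here is a hypothetical Spark configuration for an Iceberg catalog whose warehouse lives on an S3-compatible store such as Ozone's S3 Gateway. The endpoint, catalog name, and warehouse path are placeholders; the property keys follow the Apache Iceberg Spark docs and the Hadoop S3A connector docs.

    ```python
    # Illustrative Spark settings for Iceberg tables on Ozone via the S3A connector.
    ICEBERG_ON_OZONE_CONF = {
        # An Iceberg catalog (here named "lake") backed by a Hive metastore:
        "spark.sql.catalog.lake": "org.apache.iceberg.spark.SparkCatalog",
        "spark.sql.catalog.lake.type": "hive",
        "spark.sql.catalog.lake.warehouse": "s3a://warehouse/",
        # Point the S3A connector at Ozone's S3 Gateway (9878 is its default port):
        "spark.hadoop.fs.s3a.endpoint": "http://ozone-s3g.example.com:9878",
        "spark.hadoop.fs.s3a.path.style.access": "true",
    }
    # With a session built from this config, spark.table("lake.db.events") would
    # read an Iceberg table whose data files live in Ozone behind the S3 API.
    ```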

    Break

We will add more talks soon.

We are actively adding to the program. Sign up for our newsletter to stay informed.

Subscribe