2023-07-25, 32-144
Data scientists need to work with various data sources and sinks in their projects. During the workshop you will learn how to work with standard data formats using DataFrames.jl. A special focus will be placed on working with data that is larger than the available RAM.
Data science pipelines created in Julia typically need to be integrated into larger workflows involving various tools and technologies. Therefore, an important aspect is ensuring interoperability, especially in the case of large data that does not fit in the RAM of a single machine.
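As a hedged illustration (not taken from the workshop materials), one common pattern for handling larger-than-RAM data is to memory-map an Arrow file and wrap it in a DataFrame without copying; the file name "data.arrow" below is hypothetical.

```julia
# Hedged sketch: memory-mapping an Arrow file so the data need not fit in RAM.
# The file name "data.arrow" is hypothetical.
using Arrow, DataFrames

tbl = Arrow.Table("data.arrow")        # memory-maps the file by default
df = DataFrame(tbl; copycols=false)    # wrap the columns without copying them
```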
During the workshop we will discuss working with the following data formats (an illustrative sketch follows the lists):
Section 1: https://github.com/bkamins/JuliaCon2023-Tutorial
- statistical packages (Stata/SAS/SPSS and RData);
- databases (SQLite and DuckDB);
- Apache Parquet.
Section 2: https://github.com/quinnj/JuliaCon2023-Tutorial
- CSV;
- Apache Arrow;
- JSON.
The examples will use DataFrames.jl, which provides a representative implementation of the Tables.jl table interface.
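A minimal sketch of what this interoperability means in practice: any Tables.jl-compatible source can be materialized as a DataFrame, and a DataFrame can feed any Tables.jl-compatible sink. The toy data below is illustrative only.

```julia
# Hedged sketch of Tables.jl interoperability with DataFrames.jl.
using DataFrames, Tables

df = DataFrame(a = 1:3, b = ["x", "y", "z"])

rows = Tables.rowtable(df)   # DataFrame -> Vector of NamedTuples (a Tables.jl sink)
df2 = DataFrame(rows)        # any Tables.jl source -> DataFrame
```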
I am a researcher in the fields of operations research and computational social science.
For development I mostly use the Julia language.