Vladislav Shpilevoy
Backend C++ developer at VirtualMinds. Database C developer at Tarantool.
The presentation shows a Git workflow, "Atomic Flow", which helps to manage so-called high-stakes projects. It shows the cost of inefficient Git usage for a team, explains the pros and cons of merge commits and rebasing, demonstrates the huge importance of atomic commits, and closes with benchmarks and real-life examples of how "Atomic Flow" has helped people.
The deck presents my experience trying to vibecode a decent HTTP client-server library. The key constraint: I am not allowed to write any code myself, only English text in the VSCode Copilot dialogue window.
My talk is dedicated to the performance of C++ servers using an alternative to boost::asio for massively parallel network operations. boost::asio is effectively the standard for network programming in C++, but in rare cases, it may be unavailable or insufficient for various reasons. Drawing from my experience in high-performance projects, I developed a new task scheduling algorithm, built a networking library based on it, and am presenting them in this talk. The most interesting aspects include fair CPU load distribution, support for C++ coroutines, formal verification with TLA+, and reproducible benchmarks demonstrating an N-times speedup over boost::asio. The project is open-source: https://github.com/Gerold103/serverbox.
I categorize the primary sources of code performance degradation into three groups:
- Thread contention. For instance, overly hot mutexes, unnecessarily strict ordering in lock-free operations, and false sharing.
- Heap utilization. Loss is often caused by frequent allocation and deallocation of large objects, and by the absence of intrusive containers at hand.
- Network IO. Socket reads and writes are expensive because they are system calls. They can also block a thread for a long time, leading to hacks like adding tens or hundreds more threads. Such measures intensify contention, CPU usage, and memory usage, while neglecting the underlying issue.
I present a series of concise and straightforward low-level recipes for gaining performance via code optimizations. While often requiring just a handful of changes, the proposals can amplify performance N-fold. The suggestions target the bottlenecks above, which are caused by certain typical mistakes. The proposed optimizations might render architectural changes unnecessary, or even allow simplifying the setup if the existing servers start coping with the load effortlessly. As a side effect, the changes can make the code cleaner and reveal more bottlenecks to investigate.
Algorithm for a multithreaded task scheduler for languages like C, C++, C#, Rust, Java. C++ version is open-sourced. Features: (1) formally verified in TLA+, (2) even CPU usage across worker threads, (3) coroutine-like functionality, (4) almost entirely lock-free, (5) up to 10 million RPS per thread. Key points for the potential audience: fair task scheduling with multiple worker threads; open source; algorithms; TLA+ verified; up to 10 million RPS per thread; for backend programmers; algorithm for languages like C++, C, Java, Rust, C# and others.
Tarantool 2.6 was released in October 2020. It is the biggest release in several years, bringing a beta version of synchronous replication and a transaction manager for the memtx storage engine. The talk sheds light on the key features of the release.
Asynchronous replication can lead to transaction loss if the master node fails immediately after the commit. Synchronous replication ensures that a transaction is replicated to a specified number of replicas before it is committed, that is, before its changes become visible to the user. This talk describes one such algorithm, Raft, and its application in the Tarantool DBMS.
Users, groups, and permissions. Attributes and access rights of files and processes. Process groups, sessions. /etc/passwd, /etc/group. The sticky bit. Process daemonization. Object attributes.
Advanced IO. Non-blocking IO operations. File locks: flock, lockf, fcntl. Multiplexed IO: select, poll, kqueue. Async IO: aio_read/aio_write.
Network. A short history from ARPANET. The canonical OSI model, the real TCP/IP model, the protocol stack. Network implementation in the kernel. The user-space interface: socket(), connect(), close(). TCP and UDP.
SWIM is a protocol for detecting and monitoring cluster nodes and for distributing events and data between them. The protocol is lightweight and decentralized, and its speed does not depend on cluster size. The talk describes how the SWIM protocol works, and how, and with which extensions, it is implemented in Tarantool.
IPC. Pipe, FIFO. XSI: message queue, semaphore, shared memory. POSIX semaphore. Sockets: API, byte ordering. Domain sockets.