The conference is over. See you at Highload++ next time!
Moscow, SKOLKOVO
8 and 9 November 2018

Building microservices using Kafka
Backend, programming theory

Talk declined
Anugrah S
Go-Jek

I am Anugrah. I work as a product engineer at Gojek, primarily concentrating on Gofood. Previously, I worked as a data engineer at Gojek, where I built the data pipeline using Kafka, which acts as the heart of data at Gojek. I am also interested in open source, having contributed to Apache Kafka. I am a language enthusiast as well, having learned multiple programming languages and paradigms over a period of 3 years. I love building distributed systems at scale.

Abstract

At GO-JEK, we handle a daily order volume that is the largest in the world outside of China. Restaurants depend on GO-RESTO, our merchant-facing app. As we scaled, merchant order report generation became slow. This talk is about how we built a microservice on top of our Kafka pipeline to solve it.

Microservices are the building blocks that power a post-cloud digital landscape. They help us build scalable services and ease the deployment and development process. As scale rises, simple, direct HTTP calls from service to service become bottlenecks and cascade failures through the system. A distributed, horizontally scalable bus like Kafka becomes essential.
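To illustrate the kind of decoupling this buys, here is a minimal sketch of publishing an order event asynchronously to Kafka instead of making a synchronous HTTP call to the downstream service. The broker address, topic name, key, and payload fields are hypothetical and not taken from the talk; they only show the pattern.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical order event; in practice this would be a schema-managed payload.
            String orderJson = "{\"order_id\":\"42\",\"merchant_id\":\"m-7\",\"total\":125000}";
            // Keying by merchant id keeps one merchant's events on one partition, preserving order.
            producer.send(new ProducerRecord<>("merchant-orders", "m-7", orderJson));
            producer.flush();
        }
    }
}
```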

At GO-JEK, we handle 2X of India's total food tech daily order volume. The restaurants that make the food depend on GO-RESTO, our merchant-facing app. As we scaled, GO-RESTO started experiencing slow queries when displaying a merchant's complete daily order report. We could not delete the underlying table because it was used by other queries. We needed to move this data and the query away from the main service to uphold our SLA.

In order to move this data away from the main service, we started publishing it to Kafka. Using the ESB Generic Log Consumer that was written for streaming data from Kafka to Postgres, we were able to deliver the required data to the backend database of the new microservice we were writing. Then we wrote the backend service capable of handling the queries. In the end, we reduced the latencies as well as took the load off the main service.
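The talk does not include code, but the general shape of such a consumer, reading records from a Kafka topic and inserting them into a Postgres table that the new reporting service queries, might look roughly like the sketch below. The topic, table, group id, and connection details are assumptions for illustration; the actual ESB Generic Log Consumer is a separate, more general component.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class OrderReportSink {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "order-report-sink");       // assumed consumer group
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection db = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/reports", "reports", "secret")) { // assumed DB
            consumer.subscribe(Collections.singletonList("merchant-orders"));
            PreparedStatement insert = db.prepareStatement(
                "INSERT INTO merchant_orders (merchant_id, payload) VALUES (?, ?::jsonb)");
            while (true) {
                // Poll Kafka, write each record to Postgres, then commit the offsets.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    insert.setString(1, record.key());
                    insert.setString(2, record.value());
                    insert.executeUpdate();
                }
                consumer.commitSync();
            }
        }
    }
}
```

With the reporting data landing in its own database, the report queries hit this service instead of the main one, which is how the load was moved off the primary tables.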

Scaling from scratch, Machine Learning

