Microservices and Change Data Capture with Kafka

Event

9 October 2019

London

6.00pm - Doors open: food, drinks and networking

6.30pm - Talk - "Monitoring and Orchestration of Your Microservices with Kafka and Zeebe" with Bernd Rücker from Camunda (https://camunda.com/)

In this talk, we’ll demonstrate an approach based on real-life projects using the open-source workflow engine zeebe.io to orchestrate microservices. Zeebe can connect to Kafka to coordinate workflows that span many microservices, providing end-to-end process visibility without violating the principles of loose coupling and service independence. Once an orchestration flow starts, Zeebe ensures that it is eventually carried out, retrying steps upon failure. In a Kafka architecture, Zeebe can easily produce events (or commands) and subscribe to events that are then correlated to workflows. Along the way, Zeebe facilitates monitoring and visibility into the progress and status of orchestration flows. Internally, Zeebe works as a distributed, event-driven, event-sourced system, making it not only very fast but also horizontally scalable, fault-tolerant, and able to handle the throughput required to operate alongside Kafka in a microservices architecture. Expect not only slides but also fun little live-hacking sessions and real-life stories.
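
To make the Kafka-to-Zeebe handoff described above concrete, here is a minimal sketch in Java. It assumes the Zeebe Java client (io.zeebe.client, as of the 2019-era API) plus a hypothetical "payment-events" topic keyed by an order id; the message name "payment-received" is illustrative and would correspond to a message catch event in a deployed workflow model.

    import io.zeebe.client.ZeebeClient;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class KafkaToZeebeBridge {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "zeebe-bridge");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
                 ZeebeClient zeebe = ZeebeClient.newClientBuilder()
                         .brokerContactPoint("localhost:26500")
                         .usePlaintext()
                         .build()) {
                consumer.subscribe(Collections.singletonList("payment-events"));
                while (true) {
                    ConsumerRecords<String, String> records =
                            consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // Publish the Kafka event as a Zeebe message. Zeebe correlates
                        // it to the workflow instance waiting on this key (the order
                        // id), which is what keeps the services loosely coupled.
                        zeebe.newPublishMessageCommand()
                                .messageName("payment-received") // illustrative name
                                .correlationKey(record.key())
                                .variables(Map.of("payload", record.value()))
                                .send()
                                .join();
                    }
                }
            }
        }
    }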

Bernd is a co-founder of Camunda, but first and foremost he is a software developer and consultant. He has been doing BPM for more than 10 years and has committed to various open-source workflow engines over that time. Coaching countless projects made him passionate about the whole 'developer-friendly BPM' story. When he has some spare time, he gives talks at conferences or writes articles and books (e.g. the Real-Life BPMN book).

7.20pm - Talk - "Technology choices for Kafka and change data capture" with Andrew Schofield and Katherine Stanley from IBM

Changes to data can be thought of as a type of event. By publishing these events to Kafka, the evolving data can be easily distributed to many different applications in a scalable, loosely coupled way. Change data capture (CDC) is a technique for converting the changes in a data store into a stream of events. The basic idea of capturing changes and flowing the resulting events into Kafka can be achieved in a variety of ways, with different levels of data consistency and performance. This talk will introduce CDC, discuss some of the options available and delve into how they work.
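
As a sketch of the simplest option the talk alludes to, the Java snippet below polls a table for rows changed since the last poll and publishes each change to Kafka; the table, columns and topic name are hypothetical. Polling is easy to build but can miss intermediate updates between polls, which is exactly the consistency trade-off that log-based CDC tools avoid by reading the database's transaction log instead.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Timestamp;
    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class PollingChangeCapture {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
                 Connection db = DriverManager.getConnection(
                         "jdbc:postgresql://localhost/shop");
                 PreparedStatement stmt = db.prepareStatement(
                         "SELECT id, status, updated_at FROM orders "
                                 + "WHERE updated_at > ? ORDER BY updated_at")) {
                Timestamp lastSeen = new Timestamp(0L);
                while (true) {
                    stmt.setTimestamp(1, lastSeen);
                    try (ResultSet rs = stmt.executeQuery()) {
                        while (rs.next()) {
                            String id = rs.getString("id");
                            String change = String.format(
                                    "{\"id\":\"%s\",\"status\":\"%s\"}",
                                    id, rs.getString("status"));
                            // Keying by primary key sends every change for a row to
                            // the same partition, so consumers see per-row changes
                            // in order.
                            producer.send(
                                    new ProducerRecord<>("orders.changes", id, change));
                            lastSeen = rs.getTimestamp("updated_at");
                        }
                    }
                    Thread.sleep(1000); // poll interval: latency vs. database load
                }
            }
        }
    }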

Andrew Schofield is a Senior Technical Staff Member in the Hybrid Integration group of IBM Cloud. He has more than 25 years of experience in messaging middleware and the Internet of Things, with particular expertise in the areas of data integrity, transactions, high availability and performance. Andrew is an active contributor to Apache Kafka. He works at the Hursley Park laboratory in England.

Katherine Stanley is a Software Engineer in the IBM Event Streams team based in the UK. Through her work on IBM Event Streams, she has gained experience running Apache Kafka on Kubernetes and running enterprise Kafka applications. In her previous role, she specialised in cloud-native Java applications and microservices architectures. Katherine enjoys sharing her experiences and has presented at conferences around the world, including the Kafka Summits in New York and London.
