Lucas Jellema | Devoxx


From AMIS

Lucas Jellema is a solution architect and CTO at AMIS, The Netherlands. The running theme through most of his activities is the transfer of knowledge and enthusiasm (and live demos). Lucas is a JavaOne 2015 Rock Star, Oracle Developer Champion and ACE Director, and a frequent speaker at conferences such as Oracle Code, Oracle OpenWorld, JavaOne and Devoxx. He publishes techy stuff on GitHub, SlideShare, Medium, DZone, OTN and the AMIS Technology Blog (https://technology.amis.nl). He is the author of two books with O’Reilly Press.

Blog: technology.amis.nl

Server Side Java

Real Time UI with Apache Kafka Streaming Analytics of Fast Data and Server Push

Conference

Fast data arrives in real time and potentially in high volume. Rapid processing, filtering and aggregation are required to ensure timely reaction and up-to-date information in user interfaces. Doing so is a challenge; making it happen in a scalable and reliable fashion is even more interesting. This session introduces Apache Kafka as the scalable event bus that takes care of the events as they flow in, and Kafka Streams for the streaming analytics. Both Java and Node applications are demonstrated that interact with Kafka and leverage Server-Sent Events and WebSocket channels to update the web UI in real time. User activity performed by the audience in the web UI is processed by the Kafka-powered back end and results in live updates on all clients. Kafka Streams and KSQL are used to analyze the events in real time and publish events with the live findings.
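To give a feel for what such a back end can look like, below is a minimal Kafka Streams sketch in Java: it counts user-activity events per activity type in short tumbling windows and publishes the running counts to an output topic, from where a separate component could push them to browsers over Server-Sent Events or WebSockets. The topic names, window size and String serialization are illustrative assumptions, not details taken from the session.

import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

// Sketch only: topic names ("web-activity", "activity-counts-live") are hypothetical.
public class ActivityCountsApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "web-activity-analytics");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();

        // Raw user-activity events, keyed by activity type (e.g. "click", "vote").
        builder.stream("web-activity", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
               // Count events per activity type over tumbling 10-second windows.
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofSeconds(10)))
               .count()
               .toStream()
               // Re-key on the plain activity type and publish the live counts,
               // ready to be pushed to web clients over SSE or WebSocket.
               .map((windowedKey, count) -> KeyValue.pair(windowedKey.key(), count.toString()))
               .to("activity-counts-live", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}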

Cloud, Containers & Infrastructure

Automated testing on steroids – Trick for managing test data using Docker snapshots

Quickie Sessions

Automated testing is important. We all know that we should do it. We also know that it can be painful, for many reasons. One of the most agonizing aspects of automated testing is the handling of test data. Running even the simplest test against a user interface, a service, an API or even a stored procedure typically requires that a proper starting point is established in the database with respect to the data. Complex set-up steps have to prepare various records before the test can even start, and afterwards similarly complex tear-down scripts have to clean up after the test. This session demonstrates how this hardship can be a thing of the past. Using snapshots of a test database in a Docker container with a managed test data set that supports all tests, we can create automated tests without any set-up or tear-down effort. These tests can run very fast, concurrently, and whenever and wherever you like. This way of working enables much higher test coverage and greatly increased productivity for developers and testers.
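As a rough sketch of the idea (the session does not prescribe specific tooling), the following JUnit 5 test uses Testcontainers to start a throwaway database container from a pre-built snapshot image that already holds the managed test data set, so no set-up or tear-down scripts are needed. The image name, the choice of PostgreSQL and the orders table are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Each run gets its own container started from a snapshot image that already
// contains the test data set; when the test ends, the container is discarded.
@Testcontainers
class OrderServiceIT {

    // "acme/testdata-postgres:2024-05" is a hypothetical snapshot image name.
    @Container
    private static final PostgreSQLContainer<?> DB = new PostgreSQLContainer<>(
            DockerImageName.parse("acme/testdata-postgres:2024-05")
                           .asCompatibleSubstituteFor("postgres"));

    @Test
    void knownTestOrdersArePresent() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     DB.getJdbcUrl(), DB.getUsername(), DB.getPassword());
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select count(*) from orders")) {
            rs.next();
            // The snapshot guarantees the same starting data on every run.
            assertTrue(rs.getInt(1) > 0);
        }
    }
}

Because every test run starts from the same immutable snapshot, tests can also run concurrently against their own container instances without interfering with each other.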