2017 – Establishing new foundations for our data platform

3 December 2018
  • Software Engineering

Trying to build the best data platform for quantitative research and execution is no small task. Our goal is to make sure that research is not limited by data availability. We want to operate at the state of the art, which is essential in a competitive industry like ours.

It’s a bold objective, so are we really serious about it? I’ll let you judge.

In January 2017, my team was working exclusively in a Windows and .NET environment. .NET is great in many ways, but it lacks the depth of open source projects available on other platforms. Windows, well… it has improved a lot over the last few years, but it’s still behind in terms of automation and containerization.

One year later… 

We’ve decided to pivot towards the JVM. We’ve migrated significant workloads to Spark and will continue to do so in 2018. Our cluster is growing very rapidly, since we have petabyte-scale datasets to process. We use Spark extensively for time series analysis and processing; that’s an area where Spark could do better, and we have very exciting projects ahead of us to augment it on that front.
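To make that concrete, here’s a minimal sketch of the kind of time series job this involves: resampling raw ticks into one-minute bars with Spark’s Dataset API. It’s illustrative only; the input and output paths and the column names (ts, symbol, price, volume) are hypothetical.

```java
// Resample tick data into one-minute bars with Spark SQL.
// Paths and column names are hypothetical.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class TicksToBars {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("ticks-to-bars").getOrCreate();

        Dataset<Row> ticks = spark.read().parquet("hdfs:///data/ticks");

        // Tumbling one-minute windows keyed on the event timestamp, per symbol.
        Dataset<Row> bars = ticks
            .groupBy(window(col("ts"), "1 minute"), col("symbol"))
            .agg(min("price").alias("low"),
                 max("price").alias("high"),
                 sum("volume").alias("volume"));

        bars.write().parquet("hdfs:///data/bars_1m");
        spark.stop();
    }
}
```

Tumbling windows like this cover simple resampling well; it’s the ordered, as-of style operations where stock Spark tends to be weaker, which is what the augmentation projects mentioned above are about.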

On the other end of the spectrum, while we were already quite good at writing microservices, we kept rewriting a lot of infrastructure ourselves. For example, we built our own C# Kafka client 18 months ago to avoid message reordering when writes fail (the open source libraries have got a lot better since!). Whilst this is fun, we can’t hope to ever catch up with OSS, so we decided to write our new services in Java with Spring Boot to benefit from the more mature JVM ecosystem.
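For context on the reordering problem: if a produce request fails and is retried while later requests are already in flight, messages can be committed out of order. The modern Java client addresses this with the idempotent producer introduced in Kafka 0.11; here’s a minimal sketch, with the broker address and topic name purely illustrative:

```java
// A producer configured so that retries cannot reorder or duplicate messages.
// Broker address and topic name are hypothetical.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("enable.idempotence", "true"); // broker de-duplicates retries per partition
        props.put("acks", "all");                // required for idempotence

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("ticks", "key", "value"));
        }
    }
}
```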

That meant we had to spend a bit of time getting our build system right to support trunk-based development and reproducible builds with binary dependencies. Surprisingly, this is not really a solved problem in the JVM world, but thanks to various projects (notably Netflix Nebula – awesome work, guys!) we managed to assemble a very satisfying CI pipeline with very fast builds.

Operations was next, which is very important for us since downtime is a massive opportunity cost. After initially looking into Mesos/DC/OS at the beginning of the year, we settled on Kubernetes. We’ve worked on making it highly available and secure, and are now focused on building a great continuous delivery pipeline with Helm and Istio. Prometheus is also a great step forward in terms of observability. Next steps? Spinnaker, storage as a service, Windows nodes, improved multi-tenancy and secret management, chaos engineering… and probably a lot more!
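As a flavour of the observability side, exposing metrics to Prometheus from a JVM service takes very little code with the official simpleclient libraries. This is a minimal sketch; the port and metric name are illustrative, and in a real service the counter would be incremented from the request path:

```java
// Expose a /metrics endpoint for Prometheus to scrape, plus standard JVM metrics.
// Port and metric name are hypothetical.
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;
import io.prometheus.client.hotspot.DefaultExports;

public class MetricsEndpoint {
    static final Counter requests = Counter.build()
        .name("requests_total")
        .help("Total requests handled.")
        .register();

    public static void main(String[] args) throws Exception {
        DefaultExports.initialize();   // GC, memory and thread metrics
        new HTTPServer(9090);          // serves /metrics
        requests.inc();                // in practice, called per handled request
    }
}
```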

Long story short, we’ve been busy. Everybody in the team has learned a lot. We’ve made significant changes to our infrastructure while extending data coverage and delivering new features to our stakeholders. We’re also very close to being able to create and deploy a new service to production with a single command – that’s the level of automation we’re aiming for.

It’s only the beginning though: we’re growing fast and always on the lookout for great people. If you think you’d enjoy this kind of environment and have some background in any of the things I’ve mentioned, please drop me a message and let’s have a coffee. At worst, we’ll have a good time talking about tech. At best, you’ll get a cool new job paying top of the market!

Julien Lavigne Du Cadet – Software Engineer
