For Developers, by Developers

GOTO Blog

[medium.com/feel-the-hum-of-your-system] Logging with Kubernetes and Humio

Kubernetes poses an interesting problem when it comes to logging. With containers constantly being created and destroyed, logs become the only dependable window into what’s happening, but working with them becomes significantly more complex. Humio is all about getting straight to the most important detail in your logs, especially when those logs are generated in huge volumes. That’s why we’ve created an integration between Humio and Kubernetes: kubernetes2humio.
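To make the idea concrete, here is a minimal sketch of the kind of plumbing such an integration performs: reading pod logs with the official Kubernetes Python client and forwarding them to a log backend over HTTP. The ingest URL, token, and payload shape below are placeholders, and the real kubernetes2humio integration ships logs at the cluster level rather than polling pods like this.

```python
# Illustrative sketch only: pull recent pod logs and forward them to a log
# backend over HTTP. Not the actual kubernetes2humio integration; the ingest
# URL, token, and JSON payload shape are placeholders.
import requests
from kubernetes import client, config

INGEST_URL = "https://example-humio.local/api/v1/ingest"  # placeholder URL
INGEST_TOKEN = "YOUR-INGEST-TOKEN"                        # placeholder token

def ship_pod_logs(namespace: str = "default") -> None:
    config.load_kube_config()   # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    for pod in v1.list_namespaced_pod(namespace).items:
        # Fetch the last few log lines for each pod.
        log_text = v1.read_namespaced_pod_log(
            name=pod.metadata.name, namespace=namespace, tail_lines=50
        )
        # Forward them with a little Kubernetes context attached.
        requests.post(
            INGEST_URL,
            headers={"Authorization": f"Bearer {INGEST_TOKEN}"},
            json={"pod": pod.metadata.name, "namespace": namespace, "logs": log_text},
            timeout=10,
        )

if __name__ == "__main__":
    ship_pod_logs()
```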

[kitchensoap.com] Multiple Perspectives On Technical Problems and Solutions

Over the years, a number of people have asked about the details surrounding Etsy’s architecture review process. In this post, I’d like to focus on the architecture review working group’s role in facilitating dialogue about technology decision-making. 

[confluent.io] Publishing with Apache Kafka at The New York Times

At The New York Times we have a number of different systems that are used for producing content. We have several Content Management Systems, and we use third-party data and wire stories. Furthermore, given 161 years of journalism and 21 years of publishing content online, we have huge archives of content that still need to be available online, that need to be searchable, and that generally need to be available to different services and applications. On the other side we have a wide range of services and applications that need access to this published content — there are search engines, personalization services, feed generators, as well as all the different front-end applications, like the website and the native apps. Whenever an asset is published, it should be made available to all these systems with very low latency — this is news, after all — and without data loss. This article describes a new approach we developed to solving this problem, based on a log-based architecture powered by Apache Kafka™. We call it the Publishing Pipeline. The focus of the article will be on back-end systems. Specifically, we will cover how Kafka is used for storing all the articles ever published by The New York Times, and how Kafka and the Streams API are used to feed published content in real-time to the various applications and systems that make it available to our readers.
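As a rough illustration of the log-based idea, the sketch below publishes an asset to a Kafka topic and replays the topic from the beginning, which is how downstream consumers such as search or personalization can rebuild their view of all published content. It uses the confluent-kafka Python client; the topic name, broker address, consumer group, and message shape are assumptions, not the Times’ actual schema.

```python
# Minimal sketch of a log-based publishing pipeline: every published asset is
# appended to a Kafka topic, and consumers replay the topic from the start to
# rebuild their own view of the content. Names here are illustrative.
import json
from confluent_kafka import Producer, Consumer

BROKERS = "localhost:9092"
TOPIC = "published-assets"   # assumed topic name

def publish(article_id: str, article: dict) -> None:
    producer = Producer({"bootstrap.servers": BROKERS})
    # Keying by article id means later versions of the same asset replace
    # earlier ones once log compaction is enabled on the topic.
    producer.produce(TOPIC, key=article_id, value=json.dumps(article))
    producer.flush()

def replay_all() -> None:
    consumer = Consumer({
        "bootstrap.servers": BROKERS,
        "group.id": "search-indexer",       # assumed consumer group
        "auto.offset.reset": "earliest",    # start from the beginning of the log
    })
    consumer.subscribe([TOPIC])
    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            asset = json.loads(msg.value())
            print("rebuilding index entry for", msg.key(), asset.get("headline"))
    finally:
        consumer.close()
```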

[caitiem.com] Resources for Getting Started with Distributed Systems

I’m often asked how to get started with Distributed Systems, so this post documents my path and some of the resources I found most helpful. It is by no means meant to be an exhaustive list. It is worth noting that I am not classically trained in Distributed Systems; I am mostly self-taught via independent study and on-the-job experience. I do have a B.S. in Computer Science from Cornell, but focused mostly on graphics and security in my specialization classes. My love of Distributed Systems, and my education in them, came once I entered industry. The moral of this story is that you don’t need formal academic training to learn and excel at distributed systems.

[gocd.org] You can’t buy DevOps, but you may need to sell it

Having worked in the continuous delivery and DevOps space for several years, I often get frustrated when I hear "Oh, I’ll buy this tool and then I can do the DevOps." As a tool vendor, I wish it worked like that. In reality, it takes a lot of effort for anyone to drive and socialize change in an organization. In this blog series, I am going to help you get stakeholder buy-in and also clear up some misconceptions about “selling” DevOps.

[medium.com/@copyconstruct] Monitoring and Observability

What is the difference between “monitoring” and “observability”, if any? Or is the latter just the latest buzzword on the block, to be flogged and shoved down our throats until it has been milked for all it’s worth?

[allthingsdistributed.com] AI for everyone – How companies can benefit from the advance of machine learning

Whether a technology has had its breakthrough can often only be determined in hindsight. In the case of artificial intelligence (AI) and machine learning (ML), this is different. ML is the part of AI that derives rules and recognizes patterns from large amounts of data in order to predict future data. Both concepts are virtually omnipresent and at the top of most buzzword rankings. Personally, I think – and this is clearly linked to the rise of AI and ML – that there has never been a better time than today to develop smart applications and use them. Why? Because three things are coming together.

[container-solutions.com] Securing Microservices with Docker from Adrian Mouat

The excellent Adrian Mouat – Docker Captain, author of “Using Docker” and frequent GOTO speaker – recently gave a webinar on how to use Docker to secure your microservice containers. So what was the gist? Well, Adrian covered so much I’m going to break this into a two-parter. In the first part I’ll talk about the basics of a healthy Dockerfile and in the second part I’ll talk about safe deployment.

Smart energy consumption insights with Elasticsearch and Machine Learning

At home we have a Youless device which can be used to measure energy consumption. You mount it on your energy meter so it can monitor your consumption, and the device then exposes the readings via a RESTful API. We can use this API to index energy consumption data into Elasticsearch every minute and then gather insights by using Kibana and X-Pack Machine Learning. The goal of this blog is to give a practical guide on how to set up and understand X-Pack Machine Learning, so you can use it in your own projects!
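As a rough sketch of the ingestion step described above, the snippet below polls a Youless-style JSON endpoint once a minute and indexes each reading into Elasticsearch with the official Python client (8.x style). The device URL, JSON field names, and index name are assumptions; check your own device’s API and adjust before relying on it.

```python
# Sketch: poll an energy meter's REST endpoint every minute and index the
# readings into Elasticsearch for analysis in Kibana / X-Pack ML.
# Device URL, field names, and index name are assumptions, not a spec.
import time
from datetime import datetime, timezone

import requests
from elasticsearch import Elasticsearch

DEVICE_URL = "http://192.168.1.50/a?f=j"   # assumed Youless JSON endpoint
ES_INDEX = "energy-consumption"

es = Elasticsearch("http://localhost:9200")

def poll_once() -> None:
    reading = requests.get(DEVICE_URL, timeout=5).json()
    doc = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        # Field names depend on the device firmware; these are placeholders.
        "power_watt": reading.get("pwr"),
        "counter_kwh": reading.get("cnt"),
    }
    es.index(index=ES_INDEX, document=doc)

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)   # one reading per minute, as in the blog post
```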

[blog.trifork.com] Heterogeneous microservices

Microservices architecture is increasingly popular nowadays. One of its promises is flexibility and easier collaboration in larger organizations, achieved by reducing the amount of communication and coordination between teams. The thinking is that each team owns its own service(s) and doesn’t depend on other teams, so teams can work independently and coordination effort shrinks. Especially with multiple teams and multiple services per team, this can mean there are quite a few services with quite different usage patterns. Different teams can have different technology preferences, for example because they are more familiar with one technology than another. Similarly, different usage can mean quite different requirements, which might be easier to fulfill with one technology than another. How free or constrained should technology choices be in such an environment?
