
Kong Summit Keynotes and Upcoming Content!


Last month we had our first annual Kong Summit, and I am thrilled to report that it was a great success! With several hundred attendees, 25 talks and sessions, and too many great conversations to count, the amount of positive feedback that we have received has been nothing short of incredible. From the start, I have known that there is something very special about our community, and the level of community involvement at Summit is another indicator of just what a powerful group you all are. It is very humbling and motivating to have such amazing community support, and I am proud of the work our team put in to make our first Summit a great two days. For those that weren’t able to make it, we hope to see you next year at Kong Summit ’19, and in the meantime, we’re excited to share the highlights from this year’s show with you all!

 

As a thank you to our community, we will also be releasing all of the talks and sessions, starting with the keynote presentations given by our CEO Augusto “Aghi” Marietti, our CTO Marco Palladino, and our Principal Engineer Thibault Charbonnier. These can be viewed below.

I would also like to thank our wonderful guest speakers Kolton Andrus, Chad Fowler, Joseph Jacks, Dennis Kelly, Jonathan Kratter, Kin Lane, Erin McKean, Komal Mangtani, Vijay Narayanan, Neha Narkhede, Alexandra Noonan, Naoya Okada, Marcel Da Cruz Pinto, Christian Posta, Guillermo Rauch, Chris Richardson, Colin Schaub, Gwen Shapira, and Jason Walker. All of your sessions were thought-provoking and informative, and we look forward to having you return for future Kong Summits!

We’ll follow up over the coming weeks with more talks from industry experts, Kong users, and leaders from Kong Inc. Here’s to a great first Kong Summit, and many more to come! Sign up here to get the videos delivered to your inbox as they come out and keep up to date on the latest about Kong Summit 2019!




Kong Gives Back: Volunteering with the Golden Gate National Parks Conservancy


Giving back to the community is a huge part of our DNA at Kong. Every quarter, we set aside a day for our employees to volunteer and give back. This time, we were excited to join the volunteer program at the Golden Gate National Parks Conservancy, a non-profit organization dedicated to preserving the Golden Gate National Parks and making them accessible for all communities.

We spent the afternoon at the Presidio Coastal Bluffs, where our fun and knowledgeable project leader, Yakuta Poonawalla, kicked the program off by telling us about the rich history of the area, including its diverse plant and animal life. We then got to work restoring the habitat — moving dead branches, sectioning off areas with important plant life from visitors and hikers, and removing harmful weeds to prevent them from endangering other plant life in the area. It was a day of hard work, but it was very rewarding.

One of Kong’s core values is Champions, which translates to listening to and speaking up for customers, community, partners and each other — and as an extension, being a Champion for our planet. That’s why we are so thankful for organizations such as the Golden Gate National Parks Conservancy for the opportunities they offer our community to protect our special and vibrant ecosystem. As a company that was founded on an open source community, we know first-hand the power and impact of having a community come together to solve a shared problem.

We look forward to our next volunteer event and the chance to engage with our community again!

To learn more about Kong’s core values, culture and the impact we make, visit our Careers page at https://konghq.com/careers/.

 


Fireside Chat with CTO Neha Narkhede


Marco and Neha chat, CTO to CTO.

We were very lucky to have some extremely prominent open source leaders speak at Kong Summit, including Neha Narkhede, who co-created Apache Kafka, co-founded Confluent and is now its CTO. Neha and Marco Palladino sat down for a far-ranging conversation about the history of Kafka, trends driving the software industry, strategies for open core businesses, and the challenges of being a CTO.

The conversation closed with both CTOs’ advice for engineers aspiring to found companies or become Chief Technology Officers.

Sign up for recordings and updates

All of the Kong Summit sessions were recorded, and we’ll be releasing the recordings in the coming weeks. You can sign up for updates about new videos, and news about next year’s event on the summit page. We’d love to see you in 2019!


Shrinking to Grow: What Small Can Do for Your Organization


The advantages of small

We had some big names join us at Kong Summit, and one of them was Chad Fowler, who leads startup developer advocacy at Microsoft. He gave a wonderful talk on why keeping things small is one of the best things you can do for yourself and your team.

During his talk, Chad outlined how almost everything we’ve seen in the evolution of software and systems points to one fundamental truth: small things are more manageable than big things. Small iterations are better iterations. Small methods are better methods. Small teams are better teams. He discussed examples from sociology, psychology, and biology that explored how we can think small to build systems and organizations that can outlive us.

 

Sign up for recordings and updates

All of the Kong Summit sessions were recorded, and we’ll be releasing the recordings in the coming weeks. You can sign up for updates about new videos, and news about next year’s event on the summit page. We’d love to see you in 2019!


Microservices: Decomposing Applications for Testability and Deployability


Chris Richardson at Kong Summit

During the Kong Summit in September, Chris Richardson, founder of CloudFoundry.com, gave an excellent talk describing the characteristics of microservice architectures. He outlined the benefits of microservices, including testability and deployability. He also discussed the drawbacks. Microservices aren’t a silver bullet, and Chris explained some of the factors to consider before deciding to break up a monolithic codebase. In the talk, Chris provided an overview of the microservice pattern language, describing architectural patterns that can help frame the decision to move to microservices and guide the design of your implementation.

Sign up for recordings

We’ll be releasing blogs with all the summit recordings in the coming weeks. You can sign up for updates about new videos, and news about next year’s event on the summit page. We’d love to see you in 2019!


Kong Partners with Live Reply


We continue to expand our global reach through partners across the world. Most recently, we partnered with Live Reply, a premier system integrator partner in Germany, to further Kong’s awareness among leading European technology companies. A cyber security expert with offices in Düsseldorf, Frankfurt and Munich, Live Reply offers consulting, development, system integration expertise and solutions on behalf of leading companies in the Telco, Finance, Automotive and IoT sectors.

With more than 18,000 stars on GitHub and more than 45 million downloads, Kong is the most widely used open source API platform today. Kong Enterprise builds on our core open source offering to deliver a service control platform that brokers enterprises’ information between ever-proliferating microservices. Live Reply’s expertise in providing end-to-end secure API adoption for enterprise customers in mission-critical applications will help us expand our footprint across Germany.

Live Reply is the Reply Group specialist in digital services and solutions enabled by telco and media technologies. With European software distribution and strong localization service teams, Reply continues to grow as a key player in the international software industry. Through Live Reply, German customers can engage, learn and implement Kong Enterprise solutions from a top-tier, local partner.

We’re very excited to partner with Live Reply. Not only are our long-term goals closely aligned, but Live Reply is well-positioned to assist organizations as they move securely to microservices, serverless, and future architectures.

To learn more about Kong Enterprise in Germany, please contact Live Reply today!


Leveraging OpenAPI for Awesome APIs


OpenAPI: More than Just Documentation

Want to take your API to the next level? Then you should give your API an OpenAPI spec! An OpenAPI spec is more than just documentation — with it, you can leverage your API to generate client code, do automated testing and more! During Kong Summit, Wordnik Founder Erin McKean gave an overview of the universe of OpenAPI tooling and showed how to go from zero to OpenAPI spec in just a few straightforward steps! Watch the talk recording to hear about all the amazing things that you can do with an OpenAPI spec.
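To make that concrete, here is a minimal sketch in Python of the kind of tooling a spec enables: a script that loads an OpenAPI document and smoke-tests the simple GET endpoints it declares. The spec filename and base URL are assumptions for illustration; real generators and test frameworks go much further.

```python
# Sketch: drive a smoke test directly from an OpenAPI spec.
# Assumes a hypothetical spec file and base URL; needs PyYAML and requests.
import yaml
import requests

BASE_URL = "https://api.example.com"  # assumed base URL for illustration

with open("openapi.yaml") as f:  # hypothetical spec file
    spec = yaml.safe_load(f)

# Because every path and operation is machine-readable, tooling can
# enumerate the API surface without a hand-written test list.
for path, operations in spec.get("paths", {}).items():
    if "{" in path:
        continue  # skip templated paths that would need parameters
    if "get" not in operations:
        continue  # keep the sketch to parameterless GETs
    resp = requests.get(BASE_URL + path, timeout=5)
    print(f"GET {path} -> {resp.status_code}")
```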

Sign Up for Summit Updates

We’ll be releasing blogs with all the summit recordings in the coming weeks. You can sign up for updates about new videos, and news about next year’s event on the summit page. We’d love to see you in 2019!


Kubernetes & CNCF Panel: Joseph Jacks, Jonathan Kratter, Christian Posta


Kubernetes and CNCF

Containers enable microservice architectures by making it easy to develop, package and run microservices in an efficient, run-anywhere format. The mutually reinforcing trends of microservices and containerization have fueled the adoption of Kubernetes, which in turn drove the proliferation of cloud native tooling and the formation of the Cloud Native Computing Foundation.

During Kong Summit, Kong CTO and Co-Founder Marco Palladino discussed the history and future of containers, Kubernetes and the CNCF with industry thought leaders Joseph Jacks (Founder, KubeCon), Jonathan Kratter (Senior Systems Engineer, Uber) and Christian Posta (Chief Architect, Red Hat). Hear what these industry experts had to say about the state of cloud native technology in the recording.

Sign Up for Summit Updates

We’ll be releasing blogs with all the summit recordings in the coming weeks. You can sign up for updates about new videos, and news about next year’s event on the summit page. We’d love to see you in 2019!



Chaos Engineering with Kolton Andrus


What is Chaos Engineering, and What Isn’t It?

Breaking things on purpose can be a great way to understand the resiliency of your systems. During Kong Summit, Gremlin Co-Founder & CEO Kolton Andrus detailed why intentionally introducing chaos into your environments can better prepare you for real-life scenarios. Hear about his definition of chaos engineering and his scientific approach to finding weaknesses in distributed systems.
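As a toy illustration of the idea (not from the talk), the Python sketch below wraps a function so that calls randomly suffer injected latency or failure. Running your integration tests with such a wrapper enabled is a crude, application-level cousin of what host- and network-level tools like Gremlin do.

```python
# Toy chaos injection: randomly delay or fail a call so you can observe
# how callers (retries, timeouts, fallbacks) cope. Illustrative only.
import functools
import random
import time

def chaos(failure_rate=0.1, max_delay_s=1.0):
    """Decorator that injects random latency and occasional failures."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            time.sleep(random.uniform(0.0, max_delay_s))  # latency injection
            if random.random() < failure_rate:
                raise RuntimeError("chaos: injected failure")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@chaos(failure_rate=0.2)
def fetch_inventory():
    # Stand-in for a real call to a downstream service.
    return {"sku-42": 7}
```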

Sign Up for Summit Updates

We’ll be releasing blogs with all the summit recordings in the coming weeks. You can sign up for updates about new videos, and news about next year’s event on the summit page. We’d love to see you in 2019!


Holy Hacktoberfest! The Kong Community Delivers


Wow, what a month of Hacktoberfest! The Kong community showed up in a big way to improve one of the most essential aspects of Kong — our documentation (docs) website!

Earlier this year, I spent a lot of time working with our docs site code and content to get the Kong Hub launched and ended up as the volunteer docs site maintainer — at least until we hire a technical writing lead (we are hiring!). Here at Kong, we are moving fast, and we’re grateful for our strongest asset: our large and growing community of users and supporters!

As Hacktoberfest approached, that community was on my mind. I hoped we might get a few community contributions to improve the functionality and content of our docs site, so I spent some time in September and early October making sure our docs site repository was welcoming to contributors — most notably, I cleaned up our issues and open pull requests (PRs), and I labeled a number of issues “good first issue.”

As October proceeded, we started getting PRs, and they just kept coming and coming — wow, what a month it was! We had way, way, WAY more pull requests, active issues and authors in October 2018 than we’ve had any month since the Kong docs site went live more than three years ago. Fifty-nine PRs by 38 people — that’s a lot of hacking! Here is a selection of the terrific contributions received during Hacktoberfest:

Wait, What Version Am I On?!

Long ago, we added a drop-down menu to our docs site to allow visitors to pick the version of Kong or Kong Enterprise they were using, but it was still hard to tell what version you had selected while looking at the menu. Josiah Dahl made a big improvement by adding a little blue indicator reminding you which version is currently selected. Thank you, Josiah!

Check out that awesome blue indicator bar!

Flipping a Table… of Contents!

A Table of Contents (ToC) at the top of a given documentation page can help a reader get a preview of what they’ll learn on a page and help them quickly reach the section relevant to them. Since the start of the Kong docs site, we had been adding ToCs to some pages “by hand.” Each ToC needed to be carefully coded to link to the headings within the page — if the headings changed, the ToC needed editing too. Adding ToCs to new pages was also a totally manual process.

We knew there was a better way, and the community came together to help us implement it.

First, Vladimir Jimenez (the author of the awesome Jekyll Pure Liquid Table of Contents) contributed a new feature that auto-generated a ToC for each page as the docs site was being built.

While working on that, we realized we really needed to upgrade our Markdown rendering engine from an old version of redcarpet to a modern version of the now-default kramdown engine. That upgrade was tackled by Vividh Chandna and added automatic handling of what would otherwise be duplicate — and broken — ToC links to headings with the same name.

Once we had automatic ToCs that rendered correctly, we needed to remove all the old “manual” ToCs and needed to adjust heading levels on existing pages to ensure the ToCs auto-generated correctly. I took care of those fixes in our recent doc versions so that our helpful community members didn’t need to do this tedious work, but I didn’t have time to fix ALL the older versions of Kong’s docs. Casper Freksen came to the rescue and updated 211 (!) different docs pages to correct their heading levels, thus allowing automatic ToCs to render correctly.

With ToCs now on every page, users have a reason to scroll from the bottom of the page back up to the top. Vinayak kindly added “Back to ToC” links that take the reader from the bottom of the page back up to the ToC.

Wow, what a team effort! Huge thank you’s to Vladimir, Vividh, Vinayak and Casper!

So Good, We Had to Say It Twice… Let’s Just Say It Once

The heading area of our docs site pages has evolved over the years and had accumulated some redundancies along the way. Ting Lew made a terrific contribution to refactor the page headers for clarity and simplicity, with a couple tweaks from our web developer, Adam Kuhn. Thank you!

 

Before: Redundancy everywhere!

 

After: So fresh and so clean!

Consistently Consistent

As our docs site has grown, we haven’t always kept our styling consistent. For example, we discovered that numbered lists looked different on different parts of our site:

Some numbered lists had special formatting, but some did not.

Daniel Martin Gonzalez made the numbered list styling consistent across the site! Thanks, Daniel!

Code or a Link…Why Not Both?!

Sometimes we hyperlink code-formatted text in the documentation – but our CSS styling didn’t render these code examples as links, so it wasn’t apparent you could click on them. Josiah Dahl came through once again with a fix that made it visually apparent that such code examples could be clicked by site visitors – thanks!!

Don’t Copy that Prompt!

When copying and pasting commands from the docs to a terminal session, it was previously easy to mistakenly copy the command prompt $. Triple-clicking on a command would also select the $, forcing docs site users to either carefully select commands “by hand” or paste the copied text and then remove the $ manually.

Our own Darren Jennings fixed the problem for all code snippets that were marked as bash in the markdown files, but some snippets weren’t marked. Casper Freksen helped out again and added markers for all of our code snippets, clearing the way for Darren’s PR and allowing users to copy commands straight from the docs. Thank you, Casper!

Selecting the code no longer selects the prompt.

What…and When

We’ve been publishing changelogs for both Kong and Kong Enterprise for a long time, but for some reason, we weren’t including release dates in the Kong Enterprise changelog. Well, thanks to Alex Ashley, there are now dates on the enterprise changelogs! This improvement didn’t require any code changes, but determining the release dates for past Kong Enterprise versions took sleuthing — thanks for being a great detective, Alex!

Kong Nation to Kong Docs

Our active discussion forum, Kong Nation, is popular with Kong community members and Kong Inc. staff – and in many cases, questions in Kong Nation indicate the need to add information to Kong’s documentation. Community member Palash Nigam stepped up and added some detail about log levels that had previously only been explained in a Kong Nation post. I hope we see more of these contributions that bring Kong Nation and our docs site closer together to better help Kong users. Thanks, Palash!

Many Hands Make Light Work

What an incredible month of collaboration and contribution Hacktoberfest 2018 was for the Kong community. The Kong docs site benefits all our potential, current and future users, and contributions to improving it are welcome at any time, not just during October.

I look forward to all the great things to come from our community as together we make Kong better every day.


Meet Kong at AWS re:Invent 2018!


AWS re:Invent 2018 kicks off in Las Vegas next week! Each year, the AWS community gets together to discuss AWS core topics and other AWS emerging technologies, including databases, analytics and big data, security and compliance, enterprise, machine learning, and computing. This year, it’ll be even bigger and better!

Kong will be at Booth #714 in The Expo at the Venetian/Sands Expo Convention Center. Visit us at the booth to learn more about Kong’s API platform and how it can help your organization in the journey to adopting microservice, cloud native, service mesh and serverless architectures. To schedule a one-on-one meeting with a member of our team to see how Kong works first-hand and discuss how it can work with your API strategy, email us at brigitte@konghq.com.

We’ll also be giving out some fun giveaways throughout AWS re:Invent. Drop by our booth to play the Kong slot machine for a chance to win cool stuff!

See you in Vegas!

Kong at AWS re:Invent 2018 - Booth 714


Defining Service Mesh


This post will be the first in a series explaining the architecture and use of Kong’s service mesh deployment. Having a good, shared definition of the term “service mesh” is important as we dig deeper into Kong’s service mesh capabilities specifically, and this post lays that foundation. We’ll be publishing more blog posts on Kong’s service mesh features soon – stay tuned.

What is a Service Mesh?

So, what is a service mesh? The term means different things to different people. Let’s first clarify that service mesh is not a product or a feature – it’s a new pattern for inter-service communication. Service mesh isn’t a binary characteristic that is either “present” or “absent” – you can implement it incrementally and probably should. We can’t define that exact moment when a “not service mesh” becomes “service mesh.” Here at Kong, we think of the term “service mesh” in a specific way.

Service mesh is a way of solving security, reliability and observability problems that occur when multiple services communicate with each other within a given computing environment. It does this by routing inter-service communications through local proxies, without requiring changes to the applications themselves.

The Details

Let’s break apart that statement to add clarity:

  • …a way of solving security, reliability and observability problems… It is important to remember that we are going to the effort of implementing a service mesh because we have specific security, reliability and observability problems that we need to solve. Note that we haven’t yet enumerated which types of security, reliability and observability problems a service mesh can help solve. A service mesh is not the only way to solve these sorts of problems – though it can be the best way for many situations.

 

  • …that occur when multiple services… The type of problems we are solving are typically encountered when there are a multitude of services. These can include monoliths, mini-services, microservices or serverless functions that are communicating with one another. If we don’t have a multitude of services or those services aren’t communicating with one another, then a service mesh is not going to help solve the types of problems we have.

 

  • …within a given computing environment… Deploying and managing a service mesh requires that a given company, department or engineering team have the authority and access necessary to deploy and manage local proxy code on all the hosts that are running services in the mesh. The “mesh managers” need access to configure certain aspects of the hosts themselves. If your applications communicate only with third-party APIs (and not with each other) or with applications whose hosts are outside your sphere of control, you won’t be able to install, configure and benefit from the elements necessary to bring all those APIs and remote applications into a helpful service mesh.

 

  • …inter-service communications… Service mesh solves problems that arise when services communicate with one another. There is a whole separate class of problems related to the security, reliability and observability of the processes and communications that happen within a given service that service mesh doesn’t help to solve. However, one way to solve intra-service problems in a monolithic application is to refactor it into mini-services and microservices. Once you break apart a big application into multiple smaller services, you move problems from being intra-application to inter-application, and service mesh can then help to solve them.

 

  • …through local proxies… In a service mesh, a proxy runs on the same host as each service in the mesh. These proxies act as “choke points” where the proxy can enforce security policies, enhance reliability (with circuit breakers, health checks, rate limiting, retries, load balancing, etc.) and collect telemetry, logs or tracing data. When we say “service mesh,” we mean “only local proxies.” (A minimal proxy sketch follows this list.)

 

  • …without requiring changes to the applications. Theoretically, you could build all the functionality described above into each service separately. If you could change every service in your application, it’s possible that you could solve the problems like a service mesh does, without using any local proxies. However, this would introduce other problems: different implementations for each service, no way to enforce policies, and the need to update every service to make small changes to how they all communicate. This would be a larger effort than implementing a service mesh and would become more difficult as the services themselves changed and multiplied. Service mesh is popular because it makes applications more secure, reliable and observable, all without requiring changes to every service or coordination between the teams that build them.
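To make the “local proxies” bullet concrete, here is a deliberately minimal, sidecar-style HTTP proxy sketched in Python. It forwards requests to a service assumed to run on the same host, adding one reliability feature (retries) and one observability feature (request logging) without any change to the service itself. The ports and addresses are assumptions; a production mesh proxy such as Kong does vastly more.

```python
# Minimal sidecar-style proxy sketch: forward to a co-located service,
# adding retries (reliability) and request logging (observability).
# Ports and the upstream address are assumptions for illustration.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

UPSTREAM = "http://127.0.0.1:9000"  # the service on the same host
logging.basicConfig(level=logging.INFO)

class SidecarProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        for attempt in range(3):  # simple retry policy
            try:
                resp = requests.get(UPSTREAM + self.path, timeout=2)
                break
            except requests.RequestException as exc:
                logging.warning("attempt %d failed: %s", attempt + 1, exc)
        else:
            self.send_error(502, "upstream unavailable")
            return
        logging.info("GET %s -> %d", self.path, resp.status_code)  # telemetry
        self.send_response(resp.status_code)
        self.send_header("Content-Type",
                         resp.headers.get("Content-Type", "text/plain"))
        self.end_headers()
        self.wfile.write(resp.content)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), SidecarProxy).serve_forever()
```

Because the proxy sits on the same host and speaks plain HTTP, the service needs no code changes; pointing clients at port 8000 instead of 9000 is purely a deployment concern, which is exactly the property the definition above calls out.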

Next up: the how-to

Now that we have a shared definition for the term “service mesh,” we can discuss how to incrementally deploy Kong’s service mesh capabilities to improve observability, reliability and security related to your application. This will be the topic for the next post. Stay tuned!

Want to learn more about service mesh?

Find us at KubeCon + CloudNativeCon North America next week at Booth S33!


We’re excited to see you at KubeCon!


Find us at KubeCon!

Kong is headed to KubeCon + CloudNativeCon North America 2018 in Seattle next week, and we would love to see you there! If you’re planning on going, please come visit us at Booth S33 to discuss the future of cloud native computing.

Our engineers will be hanging out at the booth to talk about Kong’s Kubernetes Ingress controller, Helm chart, the new ability to deploy Kong as a service mesh, and our integration with other cloud native technologies like Zipkin, Prometheus and serverless platforms. We’ll be giving out some amazing swag and raffling off cool prizes (including a few Sonos speakers). Make sure to stop by for a chance to win!

KubeCon is all about community, and we’re really excited to see you there! If you have any questions about the conference or want more details on how to find us, please feel free to chime in on our Kong Nation forum thread about the conference.

See you at KubeCon!


To Microservices and Back Again: Insights from Both Sides of Digital Transformation


For the last few years, microservices have been gaining popularity as the software architecture pattern of the day. But even as enterprises grapple with how they can undergo “digital transformation,” some startups are looking back to their monolithic roots.

Software Engineer Alexandra Noonan topped Hacker News in July with a blog post about Segment’s journey to microservices and back again.

During Kong Summit, CTO Marco Palladino sat down with Alexandra to discuss Segment’s adoption of microservices and its move back to a monolith. Alexandra outlined the unique insights Segment gained into the principles of software architecture along the way.

Sign Up for Summit Updates

We’ll be releasing blogs with all the summit recordings in the coming weeks. You can sign up for updates about new videos, and news about next year’s event on the summit page. We’d love to see you in 2019!


Kong Enterprise 0.34 Released!


Today we’re excited to announce one of our most feature-rich releases of Kong Enterprise to date – Kong Enterprise 0.34. We’ve made huge feature additions and updates to some of your favorite Kong Enterprise tools, as well as several minor fixes to improve your experience.

Below, we’ll dive into some of the biggest changes that we’ve made in this release. We’ll highlight why you should be excited to start your Kong Enterprise journey or upgrade your existing deployment. Be sure to check out the changelog for the full details. Happy Konging.

 

What’s New?

Kong Manager

Kong Enterprise now ships with Kong Manager, a vastly improved UI for managing your teams and services. With 0.34, Kong Manager now incorporates robust features to help you run at scale, including:

  • A new navigation system to streamline movement between workspaces, and across sections within a workspace
  • Enhanced Super admin controls to create workspaces, assign users, and grant permissions globally across all workspaces or at an individual level
  • Auto-configure Kong Enterprise via Swagger file
  • Enhanced Vitals now on by default, available per workspace as well as globally
  • Eagle-eye view for an at-a-glance look at service volume per workspace, as well as health data across the cluster
  • Vitals also extensible to common 3rd party tools, including Prometheus, StatsD, and InfluxDB

Workspaces

Kong Manager now includes Workspaces, where you can organize your Kong Enterprise implementation for scale across teams. Key features include the following (a brief Admin API sketch follows the list):

  • Simplify team management by creating default roles for each workspace, and custom roles as needed
  • Easily create new users, assign them to workspaces, and even brand with colors/images if needed
  • Organize your services, plugins, consumer management, and more according to your exact specification
  • Use Enhanced RBAC to restrict your teams’ access to only what they need to see, and only what they should.
  • Appoint workspace-specific admins to maintain governance while avoiding becoming bottlenecked by a single administrator.
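As a rough sketch of what driving Workspaces can look like in practice, the Python snippet below creates a workspace through the Admin API and then creates a service inside it. It assumes a local Admin API on the default port 8001 and uses hypothetical names; consult the 0.34 documentation for the authoritative endpoints and for RBAC setup.

```python
# Rough sketch: create a workspace and a service inside it via the
# Kong Admin API. Assumes the Admin API at localhost:8001; the workspace
# and service names are hypothetical, and RBAC tokens are omitted.
import requests

ADMIN = "http://localhost:8001"

# Create a workspace for a team.
resp = requests.post(f"{ADMIN}/workspaces", json={"name": "team-payments"})
resp.raise_for_status()

# Requests prefixed with the workspace name operate inside that workspace.
resp = requests.post(
    f"{ADMIN}/team-payments/services",
    json={"name": "billing-api", "url": "http://billing.internal:8080"},
)
resp.raise_for_status()
print("created service:", resp.json()["id"])
```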

 

Dev Portal

The Kong Dev Portal is now more tightly integrated with Kong Manager, allowing you to create an individual Dev Portal for each Workspace and manage your developers through a single pane of glass. Key features include:

  • Easily create a Developer Portal for each one of your teams and control it via Workspace
  • Restrict which services are visible within specific dev portals through the Kong Manager Interface
  • Better organize your documentation into specific dev portals
  • Control differences in implementation of different dev portals (enable/disable, auth, etc) from within Kong Manager
  • Use global search to help your developers more easily find what they need across all your services, or within a specific dev portal


Other Important Additions

  • HTTPS support for Forward Proxy plugin
  • HTTPS support for health checks
  • Admin API Audit Log now available
  • Request Termination plugin improvements
  • Logging plugins log workspace info about requests
  • API Endpoint to allow auto-configuration of Routes/Services by OAPI spec

 

For a full list of fixes and optimizations, see the 0.34 changelog.




Announcing Kong Cloud


Today at KubeCon, we announced the launch of Kong Cloud – a fully managed version of Kong Enterprise designed to accelerate large organizations’ digital transformation initiatives. With Kong Cloud, customers can instantly start building cloud native services and connect all their services across different environments, vendors and platforms.

By 2022, IDC predicts that “90% of all new apps will feature microservice architectures” and that “35% of all production apps will be cloud native.” This seismic shift imposes new API requirements, and leveraging an API platform that can support modern enterprise architectures is now mission critical. To address this, Kong Cloud provides large organizations with a frictionless way to rapidly adopt the Kong Enterprise platform at scale.

Along with Kong Enterprise’s robust and battle-tested functionality, Kong Cloud brings several operational and strategic benefits for large organizations. Below, we highlight some of the core advantages of using Kong Cloud:

Achieve Cloud-Scale

Start building cloud native applications instantly. Localize services within a region to minimize latency and maximize uptime.

Manage Hands-Free

Unleash the full power of Kong Enterprise without impacting development teams. Get the latest product features and functionality with zero disruption.

Unify Services

Bridge the gap between your on-prem and cloud services with zero downtime, regardless of vendor, platform or deployment pattern.

Analyze Performance

Receive daily logs to zero in on performance issues. Understand performance at every level – from a full cluster to a single endpoint.

Smooth Migration

Support gradual migration of workloads to the cloud. Leverage Canary Releases to reduce migration risk and incrementally shift traffic mix across workloads in different environments.

Unlock Multi-Cloud

Connect services across all major cloud vendors. Take advantage of the best tools for each use case and avoid lock-in.


If you’re ready to stop managing APIs and start focusing on building services, check out our webinar on Kong Cloud or get in touch today!


Kong Gives Back to Local Nonprofits for the Holidays


With the holiday season upon us, it is not only a great time to look back on all of the hard work and accomplishments of our Kongers from the past year but also reflect on how we can share what we’ve been given with our community. With that spirit in mind, for our volunteer event this quarter, we decided to partake in a holiday gift drive that would benefit two local Bay Area nonprofits that directly serve those most at risk in our city.

The first was facilitated by the Ecumenical Hunger Program, which offers material help, support services and advocacy to families and individuals experiencing economic and personal hardship here in the Bay. Kong participated in the organization’s annual Family Sharing Program, in which we “adopted” a family in need and fulfilled a wish list of items that would help make the holiday season that much more special for the family. Thanks to the generous and enthusiastic support from Kong leadership, Kong matched dollar for dollar the amount raised by our employees! As a result, we were able to raise more than $3,000 to provide for this family.

Kong Gives Back - Ecumenical Hunger Program

The second drive Kong participated in was facilitated by La Casa de las Madres, which offers a continuum of comprehensive and empowering services to women, teens and children exposed to and at risk of abuse here in the Bay. Kong collected pajama sets and slippers for the 150+ women that La Casa serves in supportive housing sites all over the city.

Kong Gives Back - La Casa de las Madres

While both drives touch on all of our core values, two in particular stick out to me:

Real — Across all levels of our organization, Kongers were committed to doing their part in giving back to those who have not received the same opportunities as them.

Unstoppable — Once we as a company committed to these organizations, we went above and beyond our goal and did not stop just because it had been met. I’m proud to work at a company that thinks outside of itself and realizes the importance of corporate social responsibility.

Happy Holidays! Stay tuned to read about our next volunteer event.


Kong 1.0 GA


Today, we’re thrilled to announce the general availability of Kong 1.0 – a scalable, fast, open source Microservice API Gateway built to manage, secure and connect hybrid and cloud-native architectures. Kong runs in front of any service and is extended through plugins for authentication, traffic control, observability and more.

By releasing 1.0, we are making a promise of backward compatibility moving forward. With years of development and thousands of production users behind us, we have added significant features and tons of fixes that make Kong faster, more flexible and more resilient, including:

Service Mesh

In 1.0, users can now deploy Kong not only as an API gateway but also as a standalone service-mesh proxy. Kong plugins provide key functionality for service mesh out of the box, along with integrations with other cloud-native technologies including Prometheus and Zipkin, plus health checks, canary releases, blue-green deployments and much more.

Mutual TLS (mTLS) and TCP

In 1.0, the Kong cluster creates a Certificate Authority which Kong nodes can use to establish mutual TLS authentication with each other. Additionally, Kong can now route raw TCP traffic, which means Kong can now balance traffic from mail servers and other TCP-based applications, all the way from L7 to L4.

gRPC

Kong 1.0 now supports the gRPC protocol in addition to REST. Built on top of HTTP/2, gRPC support provides another option for Kong users looking to connect east-west traffic with low overhead and latency. This is particularly helpful in enabling Kong users to run more mesh deployments in hybrid environments.

New Migrations Framework

Kong 1.0 introduces a new Database Abstraction Object (DAO), which eases migrations from one database schema to another with near-zero downtime. The new DAO allows users to upgrade their Kong cluster all at once, without requiring manual intervention to upgrade each node.

Plugin Development Kit (PDK)

The PDK is a set of Lua functions and variables that can be used by custom plugins to implement their own logic on Kong. Though it was first released in 0.14.0, changes in 1.0 fulfill the promise that plugins built with the PDK will be compatible with Kong versions 1.0 and higher.

100+ Features and Fixes

You can find the full list of changes in all Kong releases in the Changelog. There are a number of breaking changes in this release, so please be sure to read the suggested upgrade path for 1.0.

The Future of Kong

Although today we make the promise that Kong is stable and backward compatible, Kong is far from done. We’re excited to continue building the project and the community with you! We’re extremely grateful to our community for all of the support in getting us to this milestone, and we look forward to continuing to build and grow together. 

 

Try Kong 1.0, and let us know what you think.

More…

Since we open sourced Kong four years ago, hundreds of Kong contributors across the world have made countless improvements to the Kong codebase that have brought us to our 1.0 release. Before we get into the new features and fixes in 1.0, we need to first thank the incredible community of Kong users and contributors that got us here. When we first open sourced Kong we could not have imagined the amazing community that has grown around it, and that continues to grow today!

“Kong was built with the vision of a hybrid world in mind, and Kong 1.0 represents a critical step towards that vision. Together with our community, we’ve made key changes to the architecture of the platform, including the ability to support service mesh, that will give our users the ability to handle any deployment across vendors, environments and ecosystems. Moving forward, be assured that we’re deepening our commitment to support even more emerging ecosystems.” – Marco Palladino, Kong CTO

Now, we’ll take a deeper look at some of our new features in 1.0 and how they can help our Community.

 

Service Mesh Support

In 1.0, users can now deploy Kong as a standalone service mesh. With sidecar proxies offering greater visibility, security and resiliency, a service mesh can help address some of the challenges inherent in microservices. In addition to these benefits, Kong’s service mesh provides a few key advantages compared to other vendors, all of which stem from injecting the same runtime we use at the edge as a sidecar proxy.

Start Instantly

With Kong you can instantly extend the same functionality you use at the edge into a mesh. Easily move services into a mesh at your own pace, by deploying Kong onto the same host as the container running your service.

Seamless Connections

Connect services in your mesh to services across all environments, platforms, and vendors. Use Kong to bridge the gap between cloud-native design and traditional architecture patterns without changing your services’ code.

Robust Plugin Library

Our plugin architecture offers users unparalleled flexibility. Kong plugins provide key functionality out of the box and support seamless integrations with other cloud-native technologies including Prometheus, Zipkin, and many others. Plugins run locally with each service rather than requiring an extra network hop to another component for complex requests.

Low Latency

Kong and its plugins are optimized for performance. While other platforms may introduce latency between services in a container or mesh, we introduce less than a millisecond of delay.

gRPC Support

Kong 1.0 now supports the gRPC protocol in addition to REST, for Kong users looking to build high-performance APIs while minimizing overhead.

Mutual TLS and Support for TCP

Kong’s support for service mesh is enabled by the addition of mutual Transport Layer Security (TLS) between Kong instances, and modifications to the plugin run loop. These changes allow Kong to be deployed alongside each instance of a service, brokering information between services and automatically scaling as those services scale. The Kong cluster creates a Certificate Authority which Kong nodes can use to establish mutual TLS authentication with each other.

As a result of the new mTLS support, Kong’s core router can now route raw TCP traffic. This means that you can now use Kong to balance traffic from mail servers and other TCP-based applications.
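For readers new to mutual TLS, the sketch below illustrates the core idea with Python’s standard ssl module: each side presents its own certificate and verifies the peer against a shared CA. The file paths and hostnames are placeholders; in Kong, all of this is handled internally by the cluster’s Certificate Authority.

```python
# Conceptual mutual TLS setup with Python's stdlib `ssl` module.
# Both peers load their own cert/key AND the CA used to verify the other
# side. All file paths and hostnames below are placeholders.
import socket
import ssl

# Server side: require and verify a client certificate.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.load_cert_chain(certfile="node-a.crt", keyfile="node-a.key")
server_ctx.load_verify_locations(cafile="cluster-ca.crt")
server_ctx.verify_mode = ssl.CERT_REQUIRED  # this makes the TLS *mutual*

# Client side: present a certificate and verify the server against the CA.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                        cafile="cluster-ca.crt")
client_ctx.load_cert_chain(certfile="node-b.crt", keyfile="node-b.key")

with socket.create_connection(("node-a.internal", 8443)) as sock:
    with client_ctx.wrap_socket(sock, server_hostname="node-a.internal") as tls:
        print("negotiated", tls.version())
        print("peer subject:", tls.getpeercert()["subject"])
```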

Separation of Data Plane and Control Plane

Kong 1.0 allows users to specify separate control and data planes in their Kong configurations. Previously, you needed to configure each cluster’s data and control planes separately; now you can make a change in a centralized location that will be reflected across multiple Kong clusters. Separate data and control plane configurations allow Kong users to better control large deployments. The separation also makes deployments more secure by allowing you to protect Kong’s configuration behind a firewall and only expose the data plane.

New Migrations Framework

Kong 1.0 introduces a new Database Abstraction Object (DAO), which eases migrations from one database schema to another with minimal to zero downtime. The new DAO allows users to upgrade their Kong cluster all at once, without requiring manual intervention to upgrade each node sequentially.

Plugin Development Kit

One of our reasons for labeling Kong 1.0 now is the Plugin Development Kit (PDK). Extensibility via plugins has been on Kong’s list of design criteria from the very beginning, and the PDK makes building those plugins safe and easy. Though it was released in 0.14.0, changes in 1.0 fulfill the promise that plugins built with the PDK will be compatible with Kong versions 1.0 and higher.

The PDK is a set of Lua functions and variables that can be used by custom plugins to implement their own logic on Kong. It provides a number of advantages over writing plugins from scratch, including:

Standardization

All Kong plugins require a standard set of functionality, which the PDK provides out of the box. This both saves plugin developers time and guarantees that plugins written on the PDK will behave similarly to each other (same parsing rules, same errors, etc.), making them easier to use.

Usability

The PDK’s interfaces are easier to use than the bare-bones ngx_lua API. The PDK allows its users to isolate plugin operations such as logging or caching from those of other plugins.

Compatibility

The PDK is semantically versioned to maintain backward compatibility. In the future, plugins will be able to lock the PDK version they depend upon.

Check out the Plugin Development Kit Reference, or read our deep dive into the PDK for more!

Runloop Performance Improvements

To ensure that the performance of our data plane exceeds the requirements of service meshes and other decentralized architectures, we made several improvements to our plugin run loop. Below, we’ll detail some of these improvements and their impact on performance.

Preread Execution

Plugins can now execute code in the new preread phase. This allows Kong users to improve performance by initializing plugins when the initial TCP connection is made.

Gateway versus Mesh Configuration

All plugins have a new field, run_on, which will control their activation in service mesh and “regular API gateway” mode. This enables more granular control over plugin activity to avoid redundancies and further enhance performance.

AWS Lambda and Azure FaaS

Kong 1.0 also includes substantial improvements to interactions with AWS Lambda and Azure FaaS, including Lambda Proxy Integration and improvements to the Azure Functions plugin to filter out headers disallowed by HTTP/2 when proxying HTTP/1.1 responses to HTTP/2 clients.



Multi-DC, Running at Scale and Yahoo! Japan Case Description


Multi-DC and Running at Scale

Kong’s stateless architecture and lightweight footprint allow it to be deployed in a variety of environments, with few adjustments required for deployment strategies. At Kong Summit, the Kong Cloud team described their experience with deploying a provider-agnostic, globally available, high performance Kong installation. They analyzed the behavior of Kong both as a request-terminating API gateway and as a reverse HTTP proxy, demonstrating its ability to capture and transform complex elements of inbound API requests and deliver them reliably to dynamic API backends across the globe. Watch the talk to understand how Kong can be deployed as a highly available, global API gateway, best practices in designing edge-tier gateway installations, and best practices for distributing API traffic from a highly available gateway to upstream traffic handlers.

Then, hear how Yahoo! Japan accelerates service development by using Kong. Naoya Okada, a software engineer at Yahoo! Japan, shares why they chose Kong for their API gateway, their multi-DC platform architecture with Kong, current use cases, and future efforts.

 

See More Kong Summit Talks

Sign up to receive updates and watch the presentations on the Kong Summit page. We’d love to see you in 2019!


Kong with Terraform: A Field of Dreams

$
0
0

Build It and They Will Come

During the Kong Summit in September, Dennis Kelly, Senior DevOps Engineer, explained how Kong became a core service—and an integral part of the architecture—across brands at Zillow Group. Starting out with a single use case for Kong Community Edition, Zillow advanced to proxying production workloads at scale with Enterprise Edition, automating deployments with Terraform. Kong’s power and flexibility fueled its explosive adoption at Zillow. This talk will give you the tools to set up your own enterprise-ready Kong clusters in Amazon Web Services (AWS) with minimal time and effort by leveraging Infrastructure as Code (IaC), creating a field of dreams for building your products.

See More Kong Summit Talks

Sign up to receive updates and watch the presentations on the Kong Summit page. We’d love to see you in 2019!

Full Transcript

I’m Dennis Kelly, a Senior DevOps Engineer for Zillow and a lead on our API strategies. I’m also an AWS Certified Solutions Architect Professional, so I have a little bit of experience with AWS as well.
Today, I’d like to talk about a number of different things. I was looking at our story at Zillow Group and how Kong has evolved. And it really came down to this: we built Kong, and then we had just this explosive adoption because Kong was just there. It was like the movie “Field of Dreams”: you build the field and they will come.

And so today, the agenda is: we’re going to give a brief introduction to Kong and what was attractive to us about it. We’ll talk about Kong at Zillow Group, the evolution there, and we’ll go over our architecture that we use in AWS, and introduce infrastructure as code with Terraform for how do we deploy our clusters, then close with some thank you’s and some time for Q&A.

So what is Kong? Everyone knows hopefully by now, throughout the Kong Summit, that it’s an API gateway. It really is just a proxy between clients and your API. If you think about going to the bar with your friend, it’s your local bar. He’s coming in with you. It’s like, “Oh, let’s order some Manhattans.” I’m like, “No, wait, I got this, bro,” because you know the bartender. So the bartender is our microservice on the back end. You’re the client wanting to request something. I give him the wink. He comes over, bypasses the beautiful women that are also waiting in line for a drink. I’m like, “I need two Manhattans, straight up with a twist.” So you could have ordered that yourself, but you may have been waiting a little bit longer. You may have not gotten the response that you wanted from the bartender, and so that guy in the middle facilitated the request, gave us some quality of service.

The beauty of Kong is its extensible API. We can add a lot of functionality there as utility features in the microservice architecture. You look at the server itself: it’s built on Nginx and OpenResty, and then the Kong server itself. We’ll go into a little bit of detail here about what that is. So Nginx is an extremely powerful web server, very high performance – it powers over 400 million websites. And so if you look at that as an open source project itself and the community behind it, it’s very attractive. OpenResty, which integrates Nginx with the LuaJIT (Just-In-Time) compiler, basically provides you an easy-to-use and scalable dynamic web gateway. That’s what Kong builds itself on top of. And then with that, Kong has its own plugin architecture where you can also extend its functionality. It’s highly extensible, scalable, and RESTful, making it a great pattern for infrastructure as code – and also platform agnostic, which is a great benefit for us given the different types of architectures that we use.

So Kong came into the picture at Zillow Group when we were looking at sharing APIs between our different brands. Zillow Group is actually composed of brands like Zillow, Trulia, HotPads, StreetEasy – you can see them down there at the bottom of the slides. Anyways, we have these development groups wanting to come in and share their APIs, and they’re already amped up and ready to go. It’s like, “Oh, let’s just set up a VPN tunnel between our two data centers and then we’ll start sharing that API.” Then the next group comes along, like, “VPC peering” … “Oh, we already set up this as a public API.”

You can see the headaches already starting to form with the operations teams. It’s like, “Okay, let’s pump the brakes here for a second.” Those are obviously old and busted ways. They’re not going to be a consistent pattern. It’s not going to be scalable for the future, not secure. So we came up with some tenets for what I call the new hotness. We wanted to build a service that could be consumed by all of our different brands. That way, when we’d look at all the different architectures and data centers that we had, we needed something that would work in each one of those.

We wanted that to be consistent and secure for the microservices as well. And we were looking for something that was standards based, and also quick and easy onboarding. I think that really translated into a story about this needing to be completely transparent to our development teams because we wouldn’t want them to go back in and have to refactor a ton of stuff in order for Kong to work. That was again, one of the big attractions to Kong: we could abstract a lot of that stuff into Kong, unify a lot of the functionality into one spot and then not have to be dealing with, “Oh, we found a security bug in this utility microservice communication package that we’ve built.” Trying to get teams to upgrade that in a consistent way would be a nightmare.

So this is where Kong came aboard. Working with teams down at Trulia, we came up with this architecture for sharing our microservices using Kong. At Zillow, we have these things called brain dumps. They happen every Tuesday, where you’re introducing new concepts and new services to the company. And so, I presented on Kong on Tuesday, August 15th, 2017, a little over a year ago. All of a sudden, that’s when Kong blew up. Week 2, I had meetings booked out for weeks in advance, basically taking up three quarters of my time. I had 40 Jira tickets in two weeks of people requesting Kong. It was pretty overwhelming – thank God for PMs. Right?

As Kong was hitting the water cooler talk, it was starting to gain a lot of momentum. It was like “Okay, we’re sharing APIs between these brands and we hear about all these other cool features of Kong.” And so it was like:
“Can I? Can we do public APIs?” Well, yes. Yes, you can.
“We want to do some CORS with that as well.” Yes. Yes, you can.
“Rate limiting?” Yes, you can.
And so then, all of a sudden, playing in the back of my mind was that song by A Tribe Called Quest: “Can I Kick It?” Yes, you can.
“East/west authentication.” Yes, you can.
And now I’m feeling like the Kong guru at Zillow. It’s like, “Can I?” Yes, you can. I just wanted to have the tape in a big old boom box, just ready to go for any time someone came up to me and asked, “Can I do this with Kong?” And so the last one was Lambda. Yes, you can. So can you kick it? Can you Kong it? You absolutely can.

So I’m going to get into a couple of specific use cases that I had introduced in that last slide. One was our east-to-west authentication. When we think of a Kong API gateway, a lot of that is north and south, and that is basically data coming in and out of your data center. East-west is the traffic within it. So we had a specific service – an email subscription service that manages a large number of campaigns and people’s subscriptions to those campaigns – and they were definitely concerned about the pattern of, “Oh, hey, I need access to this. I’m going to go look at this other service, copy and paste code from it, and then my stuff is up and running.” And then all of a sudden you have these inherent consumers that you’ve onboarded without knowing about it. Because email can be such a tricky and very spammy thing, they didn’t want anyone just having access to it, and they really wanted to control access within that service to specific endpoints.

So if I’m creating a campaign for my specific microservice, you’re going to be limited to the scope of that particular campaign. And so they came to us with this potential opportunity. We decided that we’d create an API endpoint for each service route. We then had a one-to-one relationship between each API endpoint and a whitelist group. And then for each of the microservice consumers that we onboarded, we created a consumer for them and added them to each of the groups for the endpoints they needed access to. And then, for the API keys, we use our own version of Vault for escrowing those values, so the service owners themselves don’t even have access to them. Those get instantiated on deploy of the application.

And then came along caching. We had an old service that was struggling with the current load of our website. It was a service that was hit for every home detail page. So basically, when you go to Zillow.com and you’re looking at a specific property, there were property attributes being loaded from this service that just couldn’t keep up with the current load. It was an older one that was tied to SQL Server, and they were thinking about browning out the service until they could build the replacement using DynamoDB. Then they came to us, like, “Let’s do some caching.” Initially it was Squid and Varnish, and our ops team was like, “We don’t want to get into maintaining this,” because when the development teams come to you and say six months, yeah, it’s going to be done in a year.

And this was at the time we were starting to evaluate Kong Enterprise, because we were really starting to ramp up our workloads to enterprise levels and it was becoming a core service area. And so, not only from the support perspective but also looking at this caching plugin, we went into an evaluation of Kong Enterprise and found that this was going to be a great solution for us, because then we weren’t introducing new technology that we had to maintain. We already had that Kong infrastructure. We’d already built that field of dreams. And so onboarding this was very easy. That, and we looked at the complications of caching with other solutions: having Redis as a backend means we’re warming the cache for every single node at the same time, and not having to do that on an individual instance basis was a really powerful advantage for us.

And so this is when Kong Enterprise went into production. We looked at the amount of data that we wanted to cache in order for the service to be healthy, and we sized our backend Redis appropriately. We were getting about a 70% hit rate on the cache, and it brought down our average latency from 25 milliseconds to 4. We were really impressed with that, having not originally implemented Kong as a caching solution at all. So it was, again, really impressive for us.

And so looking back at a lot of the factors of our success, obviously Kong played a big part of that, but then we have the Zillow Group core values: that we move fast and think big, that ZG is a team sport, and that we own it.

And so it was really great to see a lot of the different brands come together, embrace this idea, collaborate, and build this solution with me. Along with that, it was very complementary to a lot of the DevOps principles with which we partnered with our customers, our development teams, for success: we automate, automate, automate as much as we can, we make things self-service, and we do things in a way that allows us to iterate quickly.

And then again, the power and flexibility of Kong just really opened up a lot of doors for us. And Kong just being there caused people to really think differently about how we were doing things. And then lastly, there were a lot of features of AWS that we took advantage of in order to scale out to enterprise workloads. With AWS, we ended up leveraging a lot of their best practices. So in terms of high availability, in each region they offer multiple availability zones, and these are basically separate data centers within a geographic region that give you redundant everything at every level. And it’s very important to leverage multiple AZs, because if you’ve ever used AWS, some of those go down sometimes.

Then the ability to elastically scale. I think a lot of people, when they think of scaling, it’s just upward and onward, where it’s like I’m only ever going to be adding more. And I think one of the important tenets in the cloud is that if you really want to see that AWS savings at the end of the month, you also need to be able to scale down, and it’s a really important practice to implement. Otherwise, at the end of the month, it’s like, “Why am I spending a million dollars on this? I thought this was going to be cheaper?” It’s like, “No, you need to scale both ways,” and it’s just like those sweatpants that you put on here. I’ve been eating well at this conference all week, drinking free drinks. I’m going to expand those sweatpants out, but when I get home and start working out again, they need to still fit and not fall off my ass.

So scaling up and down, and also scaling horizontally and vertically. When you’re looking at the database instances and the EC2 instances that you’re using, you want to be able to increase the size of those instances, and you also want to be able to add more instances, to scale in both directions, and AWS has a lot of tools out there that help us with the automation process. Then security. Even though it’s the last item on the slide, it should never be the last thought. It should always be an integral part of anything that you do, also realizing that in AWS it’s a shared responsibility between you and AWS: you should be leveraging a least-privilege model where you only introduce permissions and access as they’re needed, and using security as code will allow you to make sure that your policies are enforced.

And so we were looking at our AWS resources for Kong. We went with PostgreSQL because we just have in-house experience with it and we’re very comfortable with it. We didn’t have any Cassandra before, and for the way that we wanted to manage and scale Kong, it was the right fit. And we went with Aurora because, again, of the enterprise aspects. If you look at RDS versus Aurora, you’re getting multi-AZ clustering, managed auto-patching, automated backups, a lot of enterprise features that will help you withstand a disaster.

And then, because we were using rate limiting and also the caching, we wanted a Redis backend, so we used ElastiCache for that. Again, a managed service that scales out and uses the writer and replica technologies. And then EC2 Auto Scaling: as we added new nodes to the Kong cluster, or there was, say, a hardware failure in AWS, we wanted to be able to replace those nodes or add new nodes as needed.

I think one really cool thing that’s often overlooked in AWS is the EC2 Parameter Store. It’s a great key-value, secure-string service that you can use to protect your data. We actually use it for our database passwords, API keys, a lot of the sensitive information that we don’t want sitting out there in a repository or in our Terraform state files.
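
For anyone who hasn’t used it, here’s roughly what that flow looks like with the AWS CLI; the parameter name is made up, and the value would be whatever secret you’re escrowing:

```bash
# Store a secret as a KMS-encrypted SecureString.
aws ssm put-parameter \
  --name "/kong/dev/pg-password" \
  --type SecureString \
  --value "REPLACE_WITH_A_REAL_SECRET"

# Kong nodes read it back at boot through their IAM instance profile.
aws ssm get-parameter \
  --name "/kong/dev/pg-password" \
  --with-decryption \
  --query Parameter.Value --output text
```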

Again, another important piece is Elastic Load Balancing; have that in multiple AZs to protect your services and scale out. Then CloudWatch: you need to be able to monitor and alert on the health of your services. We used IAM for our instance profiles on the Kong nodes, so that they can reach out and get access to various things like the EC2 Parameter Store. So we’re not embedding keys in any of the nodes.

And then again, with security groups, a least-privilege model: everything that we did in AWS was really locked down to the specific things that needed access to it. So our load balancers can talk to a Kong node, and nothing else can talk to a Kong node. You can talk to the load balancer, Kong nodes can talk to the database, and everything is very secure and locked down.

And so this is what our architecture looked like and still looks like. Right in the middle, you’ll see a Kong cluster that has, again, the auto scaling group, where the nodes in the cluster can scale out and scale back in as needed. We have an external load balancer to accept connections from the public internet and from other Kong clusters in different data centers. And we only expose that via SSL.

Internally, we have a load balancer that can do both HTTP and HTTPS, depending on the service. And then for the Admin GUI functionality in the Enterprise edition, you can also access that via SSL, and the Admin API as well. The way we designed this was that we wanted it to be completely transparent for the microservices. So our consumers actually hit a Kong endpoint in their local VPC that adds the API key for them and forwards that to the remote Kong cluster, which validates the API key as that consumer and then forwards it on to the microservice. That way, we in operations can seamlessly do key rotation without having to impact a service deploy or have developers reconfigure things.
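
A hedged sketch of the local-VPC side of that hop: the request-transformer plugin injects the consumer’s key before forwarding to the remote cluster. The names, the remote hostname, and the key value (substituted at deploy time in our setup) are all placeholders:

```bash
# Local Kong: a service whose upstream is the remote Kong cluster.
curl -X POST http://localhost:8001/services \
  --data "name=remote-email-svc" \
  --data "url=https://kong.remote-dc.example.com/email"
curl -X POST http://localhost:8001/services/remote-email-svc/routes \
  --data "paths[]=/email"

# Add the API key in transit, so the calling service never handles it.
curl -X POST http://localhost:8001/services/remote-email-svc/plugins \
  --data "name=request-transformer" \
  --data "config.add.headers=apikey:INJECTED_ON_DEPLOY"
```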

So in terms of provisioning resources, infrastructure as code is the new buzzword, and it’s a great buzzword in terms of what it actually implies. So what is it? It’s just machine-readable configuration of your data center, and this can be a script or a declarative definition of what you want your infrastructure to look like and how it should behave. The benefits of doing it this way are that you can version it; you can put it in a repository, iterate on it, and revert back to a previous state as needed. It’s also shareable, and that was very valuable to us at Zillow because we have multiple brands, multiple DevOps teams, and multiple people provisioning. It’s reusable, in that we didn’t have to reinvent the wheel at each step; we were always using the same code together. And it’s repeatable, because even within a DevOps group we were deploying multiple Kong clusters, so we needed a way to ramp up quickly to meet the demands of our customers.

And so at Zillow, Terraform was our way of doing that. There are obviously other tools out there for AWS, but we had already standardized on Terraform, and so that’s how we did Kong. Terraform is just a tool that codifies APIs into declarative configurations, and you can go to the website. Some of the additional benefits of Terraform are that it is open source, just like Kong, and has a great community behind it. Another benefit is that it can really manage your complex change sets. So if you’re looking at introducing, say, a new resource or modifying an existing resource, Terraform can go look at what already exists in your VPC, compare that to the changes you want to apply, and only make those changes. It can also manage resource dependencies. So if you have a security group that depends on another security group, or an EC2 instance that’s going to depend on a database, Terraform can help you easily manage those dependencies to make sure that resources are created, modified, and destroyed in the appropriate order.

So getting started with Terraform: you can just go to their website and download it for free. It’s available for a number of different platforms, and you can use Homebrew on macOS to get started with it as well. You just unzip the binary and place it into a folder that’s in your path, and here are some instructions to help you do that. Super easy to do. Then you just verify your installation. Here, I’m on my system called Awesome (I have another one called Booya): terraform --version, and it will give you what you have currently installed. It’s actually important to take notice of this, because Terraform is a project that iterates really quickly. They release new versions all the time, and it’s important to stay up to date. You may actually find, when you start collaborating with other people, that if they’ve downloaded a version after you and have something more up to date, you’re going to have to upgrade in order to modify the state.
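
For reference, the whole install is roughly this (the version number is just an example from that era):

```bash
# macOS via Homebrew:
brew install terraform

# Or on any platform: download the zip, unzip, put the binary on your PATH.
curl -LO https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip
unzip terraform_0.11.10_linux_amd64.zip
sudo mv terraform /usr/local/bin/

terraform --version   # prints the installed version, e.g. "Terraform v0.11.10"
```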

There are a number of different providers that you can use to basically describe, provision, change, and destroy resources in your environment. There are actually over 70 officially supported providers, AWS being one of those; it’s been brought up in other talks as well. There are some community providers too, Kong being one of those. So a very flexible and powerful tool.

So Terraform configuration is basically just text files. There is a Terraform format with a .tf extension, and this is actually the preferred way to describe your infrastructure, because it’s human-readable and you can add comments to it. The declarative format for that is HCL, the HashiCorp Configuration Language. You can actually also do it with JSON, with a .tf.json extension, but that’s really designed for applications that would be generating the Terraform for you to apply. Again, recognize the limitations of JSON: it’s not as human-readable, and you’re not going to have the comments.

Configuration semantics for Terraform: it basically looks at all the files in your directory, orders them alphabetically, and merges them all together. So if you, say, create a definition of a resource in multiple files, you’ll actually get an error on that merge because of the multiple declarations. There is a pattern for overriding; I’ll list that here for further reading on your part, but I’m not going to go into details, just so we can focus on Kong. So yes, Terraform is declarative; the order of references and variables within the files doesn’t matter, since it’s going to merge them all for you. Some basics: typically, you’ll lay out a directory with a main.tf, and this is where you specify your provider. This could be the AWS one, where you’re going to give it some credentials, or it could be the Kong one, where you give it your Admin API token.

Then you define resources, in basically any file name that you want; typically it would be redis.tf or aurora.tf, to describe the resource that you’re trying to provision. A resource is just a component of infrastructure, and you can actually have multiple definitions within one file. A data source, typically stored in data.tf, basically references an existing piece of infrastructure that wasn’t created within your Terraform directory. It may have been created by someone else in their Terraform directory, but instead of having to statically reference, say, a resource ID like an ARN, you can use the data source to pull in that information for you. That way, if you were to, say, rebuild a VPC and get a new ARN, you’re not having to update those static references in each place.

And then variables are basically just parameters that you can specify; think of any other programming language, and even though HCL is not a programming language, it’s the same principle. And then lastly, modules: these are basically a Terraform directory of resources encapsulated into one group that can be reusable.
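
Tying those pieces together (modules show up in the cluster example in a moment), a minimal directory might look like this; every name and ID below is illustrative:

```bash
mkdir kong-demo && cd kong-demo

# main.tf: the provider; credentials come from the environment or ~/.aws.
cat > main.tf <<'EOF'
provider "aws" {
  region = "us-west-2"
}
EOF

# variables.tf: parameters you can set per environment.
cat > variables.tf <<'EOF'
variable "environment" {
  default = "dev"
}
EOF

# data.tf: a data source looks up infrastructure created elsewhere, by tag.
cat > data.tf <<'EOF'
data "aws_subnet_ids" "private" {
  vpc_id = "vpc-0123456789abcdef0"
  tags = {
    type = "private"
  }
}
EOF

# redis.tf: a resource is one declared component of infrastructure.
cat > redis.tf <<'EOF'
resource "aws_elasticache_cluster" "kong" {
  cluster_id      = "kong-${var.environment}"
  engine          = "redis"
  node_type       = "cache.t2.small"
  num_cache_nodes = 1
}
EOF
```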

And so we developed a module for provisioning our Kong clusters in AWS, and I’ve tried to make it a pretty low barrier to entry in terms of its prerequisites. Obviously you’re deploying into AWS, so you’re going to need an AWS account. We do everything in VPCs, so you’ll have to have your VPC set up. Then you’ll have public and private subnets, ones that are exposed to the Internet and ones that are completely internal; this is actually a best practice for AWS, so that you only expose things as needed. And we do labeling. We do a lot of tagging in our AWS accounts, and this way, again, we can use data sources to reference those without having to have a static reference to a subnet ID in there. So for this module, you just have to label your private and public subnets using the type tag.

Then a default VPC security group. This is going to be for giving you SSH access to the Kong nodes, so it allows you to choose how you want to do that, whether it’s, say, a corporate subnet or a bastion host. Then you’re going to need an SSH key pair for SSH access into the Kong nodes, and an SSL certificate for HTTPS on the load balancers; this can actually be different for each one of the SSL endpoints. And then lastly, you just need Terraform. So hopefully a very low barrier to get in: you need your AWS account set up and Terraform.

And so I’m happy to announce that we’re releasing this open source project for everyone to share. It’s now ZillowGroup/kong-terraform on GitHub, and we’ve added it to the Kong Hub that was released earlier today.

And so here’s an example of building your Kong cluster using our module. Pretty easy to do. And so this is the point where it’s like, “Wabam. You got your Kong cluster, I’m selling it on TV.” Like one of those, yeah, “It’s now yours for three easy payments.” And that last one is going to be super complex, because don’t all you CS majors feel robbed for never having used calculus throughout your entire career? So I made that last payment super complex so that you can apply some of that and feel like you get value for that CS degree. And so provisioning with Terraform is easy as one, two, three: terraform init, plan, and apply. And again, wabam, you’ve got your Kong cluster, and this is my Oprah moment. It’s like, “You get a cluster, you get a cluster, he gets a cluster.”
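
In spirit, that one-pager looks something like the following; the module source and the input names are placeholders for whatever the released module actually expects (check its README):

```bash
cat > kong.tf <<'EOF'
module "kong" {
  source      = "github.com/ZillowGroup/kong-terraform"  # placeholder path
  environment = "dev"                                    # hypothetical inputs
  vpc_id      = "vpc-0123456789abcdef0"
  ssh_key     = "kong-dev"
}
EOF

# Easy as one, two, three:
terraform init    # fetch the providers and the module
terraform plan    # preview the change set
terraform apply   # wabam: you get a cluster
```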

So while that’s actually applying and setting up all the resources, you actually have to go and do some things. If you go into the AWS console, into the Parameter Store, you’re going to want to add a password for your Kong cluster. Basically, you can set it to whatever you want. I don’t have that in any of the Terraform .tf files because, again, we don’t want that being committed to a repository and then exposed to people that shouldn’t have it.

And the same thing with the license for EE and the credentials for the Bintray auth; this is only if you’re doing the EE edition, and you can actually do CE and EE with this module. The Kong nodes are running Minimal Ubuntu, and I’m actually a super huge fan of it. I came from the Debian world long ago, but Ubuntu is really a modern operating system that’s built for the cloud. Minimal Ubuntu has a very small footprint, I think it’s under 100 megs. It’s very secure because, again, we’re reducing the surface area and scope of what’s being installed. And it’s really fast booting, so I can provision an instance in 90 seconds from scratch. It also has an optimized kernel for AWS to give you even better performance.

The Kong service itself is installed for you. It’s supervised under a program called runit, also driven by its command-line tool, sv. Basically, this will manage the Kong process for you, so if it somehow crashes or fails, it will restart it for you. We’ve also added a Kong Splunk plugin for logging. So a great segue from the previous talk, where he was talking about doing Splunk logging: the Splunk plugin is now released as of today.
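
Day to day, that supervision means you drive Kong through sv rather than running it directly, something like:

```bash
sudo sv status kong    # e.g. "run: kong: (pid 1234) 86400s"
sudo sv restart kong   # runit also restarts the process on its own if it dies
```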

Then automatic local log file rotation. For our declarative management of the API endpoints, we started out with Kongfig, and this was actually before the Terraform provider for Kong was released. And so that also gets installed.

And so one of the cool hacks that came up when I created this module was: how do I do ELB (Elastic Load Balancing) health checks? This actually became the first Kong endpoint. You have /status on the Admin API, but for CE we didn’t expose our Admin API on any of our load balancers. So I figured, “I’ll just create a /status on the Kong gateway that points to the localhost Admin API status, and that way I can do health checks.”

The Enterprise version was just a modification of that because, again, you can’t really do auth when I’m just giving it an HTTP endpoint for status. With Enterprise, RBAC is enabled by default, so what I did was create a monitor user with a token that has access to /status. Then we create the Kong /status endpoint and modify it using the request-transformer plugin to add the Kong admin token for the monitor user, which is just “monitor.”
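
Here’s a sketch of both variants of that hack; the token is a placeholder, and on CE the /status service simply points the proxy at the node’s own Admin API:

```bash
# CE: expose the local Admin API's /status through the proxy for ELB checks.
curl -X POST http://localhost:8001/services \
  --data "name=status" \
  --data "url=http://localhost:8001/status"
curl -X POST http://localhost:8001/services/status/routes \
  --data "paths[]=/status"

# EE: RBAC is on, so add the monitor user's token in transit.
curl -X POST http://localhost:8001/services/status/plugins \
  --data "name=request-transformer" \
  --data "config.add.headers=Kong-Admin-Token:MONITOR_USER_TOKEN"
```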

And so once you’ve provisioned your cluster, there are some additional steps you want to take. By default, the root password is just KongChangeMeNow#1, so you’ll want to log into one of your instances and change it. It’s not too much of a security threat, because only the Kong instances themselves have access to that PostgreSQL database, but it’s definitely a good practice. You can then update that root password in the EC2 Parameter Store, so as you provision new Kong nodes, they have an up-to-date configuration. Also, you’ll want to enable IP whitelisting on that /status endpoint, so that you’re not exposing it to the public on your external load balancer. And then for the Enterprise edition, the default admin user in RBAC is just zg-kong-2-1. Obviously that’s going to rev with each version, but you can then log into the Admin GUI and change that.

I’m not sure if the Kong Terraform provider can do RBAC yet, but Kongfig can’t, and so we just do that manually through the GUI. You’ll also then want to update that value in the EC2 Parameter Store, because that admin token can be used for your declarative definitions of Kong endpoints. Some additional features of this module: almost all the settings are tweakable. You can change your EC2 instance sizes, your timeouts, your thresholds, and resources can also be optionally provisioned. So if you’re not using Redis, you don’t have to enable Redis. Say you have an existing PostgreSQL database that you want to use; you don’t have to provision Aurora.

Then a big thing for us is CloudWatch. And so there are CloudWatch actions that you can define that will trigger on the various thresholds that can be tweaked.

And so this allows you to send an email or a PagerDuty alert if a Kong node goes down or you’re hitting 4xx/5xx thresholds. You can also add bastion host access to all the resources, since everything is locked down. Say you want to manage the PostgreSQL database outside of that; you can add that to the bastion host CIDR blocks.
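
As a sketch of one such alarm (alarm name, load balancer name, threshold, and SNS topic are all placeholders), wired up with the AWS CLI instead of Terraform for brevity:

```bash
# Page us when the external ELB sees a sustained burst of backend 5xx errors.
aws cloudwatch put-metric-alarm \
  --alarm-name kong-dev-elb-5xx \
  --namespace AWS/ELB \
  --metric-name HTTPCode_Backend_5XX \
  --dimensions Name=LoadBalancerName,Value=kong-dev-external \
  --statistic Sum --period 60 --evaluation-periods 5 \
  --threshold 100 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-west-2:123456789012:kong-alerts
```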

Some recommendations for running in production would be to use the c5.2xlarges; this is actually Kong’s recommendation as well. And if you look at it, you’re going to see your host running maybe 3 to 5% CPU and like 800 megs of RAM out of the 16 gigs I think it has. And you’re like, “Why am I doing this?” It’s for the networking. If you look at AWS instance sizes, you have to evaluate not only the CPU and memory that you need to provision for; the network is very important.

If you look at those T2 instances, you’re going to get burstable network speeds that are going to be sporadic, and they’re not very kind to production workloads. The c5.2xlarge gets you into that 10-gig range, which will scale for production. Obviously, you’re going to want to look at those CloudWatch logs and then implement auto scaling policies for when you shrink down and when you go big for your production workloads. The module also allows you to add additional tags to all of the AWS resources. When they get provisioned, they’re going to have a service and description tag, and basically when you provision the services, the name is going to be zg-kong-2-1- plus whatever environment you define, so dev, staging, prod. But you can also go in and add additional tags in the module itself, which will be passed to every single resource that gets created, and this will help you with your auditing and billing.

Then I highly recommend you send those Kong logs to a remote endpoint. I mentioned Splunk, and that’s what we use. It really enables you and your developers to have visibility into the health of the cluster and the health of the application, and our teams heavily rely on this to monitor and alert on the health of their application. We use it to do the same for the clusters. One really cool thing is that when we released the Splunk plugin at Zillow, everyone immediately went in and started looking at latencies, and they were just blown away. I’m really impressed with how performant Kong really is: to see an average latency of zero milliseconds on processing, and a really low p99, I think it was like 4 milliseconds. It was impressive, and it just boosted everyone’s confidence in Kong.

Then lastly, we talked about API key management, because we’re again putting this in declarative form where you don’t know what that key is. We actually use pwgen, a little command-line utility for Linux and Mac, to generate secure passwords. That’s what the -s does: it makes it more secure. You can then specify the length, and for API keys we use 32. Then the 1 just says give me one password. This way, as you go into the declarative world for your API endpoints, when someone gives you, say, a pull request to create a new endpoint and there’s an API key that’s supposed to be in there, they can actually just provide you a token. You generate a value for that token using pwgen, and then you can store it in something like Vault or the EC2 Parameter Store and have it automatically inserted into the rendered config as it’s applied.
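
Concretely, that’s just:

```bash
pwgen -s 32 1   # -s: fully random ("secure"); 32: key length; 1: print one password
```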

Some thoughts on API management: we’re using Kongfig currently. Again, that was because the Terraform provider didn’t exist at the time, but it does have some limitations; it doesn’t support services and routes in the newer versions of Kong, and its development has really slowed down. So we’ve been looking at some alternatives. Terraform, I think, is going to be the go-to in the future, because it also puts this into the same declarative language that we use to provision Kong itself. And then there’s also Ansible.
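
To give a flavor of where that’s heading, the community Kong provider lets endpoint definitions live in the same HCL used to provision the cluster. A rough sketch; the provider’s attribute names here are from memory and may differ by version:

```bash
cat > endpoints.tf <<'EOF'
provider "kong" {
  kong_admin_uri = "https://kong-admin.internal.example.com"  # placeholder
}

resource "kong_service" "email" {
  name     = "email-campaign-create"
  protocol = "http"
  host     = "email-svc.internal"
  port     = 80
}
EOF
```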

Then you can set up policies to make this completely self-service, so that your customers can send you a pull request for the endpoint that they want to create or modify, and as you merge to master, it’s automatically applied to the appropriate cluster. Some ideas that I’ve had for the future: again, this is going to ease the entry into using our Kong cluster, but we’re looking at maybe using a custom AMI, built with HashiCorp’s Packer, and this would basically allow us to configure Kong and a lot of the static content before it even boots. That way, we’re only applying database passwords and some of the dynamic configuration as it comes up.

StatsD integration, so that we can get more metrics on the Kong nodes themselves, and additional CloudWatch triggers so that we can alarm on CPU and memory and disk. Then also providing some example auto scaling policies; we’re still in the process of setting that up ourselves. Right now, we provision however many Kong nodes we think we need for a given environment and just keep it statically there. The auto scaling policies replace nodes as needed as, say, AWS changes hardware underneath us, but we really haven’t had a need to scale that out yet, because Kong is so performant that we just haven’t had to scale beyond the minimum amount that we feel we need for reliability.

Then we’re also evaluating the PerimeterX plugin. PerimeterX is an enhanced bot protection framework; we use it on our primary website, and being able to offer that within our Kong service as well would be a great complement to it. And then, instead of having all of our different Kong clusters talk to each other over the internet, we’re thinking about setting up, say, a transit Kong VPC where there’s only Kong in it and we don’t mind peering with all of our different brands, because at some level we do trust them, and that way we’re getting better performance at the VPC level.

And then, with the exciting announcement of 1.0, we’re definitely looking into the service mesh opportunities there. But even before the sidecar proxy was released, I was thinking: what if we just got rid of all of our load balancers in front of our services and just had Kong? Because then you would have all of those great features available to you and basically have a service mesh within your data center, without refactoring all these things or adding sidecar capabilities. Even though it’s out in the network and not in the sidecar itself, all those features still exist and are available to you immediately, before, say, migrating to Kubernetes or changing out your entire stack.

So big thanks to some people at Zillow Group. Toby Roberts, the VP of Operations, and Leif Halverson, the Director of Infrastructure, who’s my supervisor, were great supporters of our transition to Kong Enterprise and had no problem justifying that expense. Jan Grant, who’s our PM for the project, kept me on task with all of those Jira tickets. And then my own teams, and this isn’t just production operations at Zillow, because I’m in Seattle, but also the other brands here; I’ve got some guys sitting up here in the front row who were very prominent in the implementation of all of this.

And all the product teams we partner with: it was exciting to do this with a bunch of teams that were eager to onboard, and it was just a really fun process. And then all the people at Kong headquarters. You really struggle to find people in this industry who are just so bright, pleasant, and fun to work with. Danny, Aaron, and Harry on the cloud team, Ben Helves, our customer success engineer Travis, I could go on. And then all of you Kongers: I think Kong is such an awesome technology, but then to back it up with such a vibrant and cool community, my hat’s off to all you guys.

Thank you.
And so, as Kong was hitting the water cooler talk, it was starting to gain a lot of momentum. And so it’s like, “Okay, we’re sharing APIs between these brands and we hear about all these other cool features of Kong.” And so it was like, “Can I?” Can we do public APIs? Well, yes. Yes you can. We want to do some cores with that as well. Yes. Yes, you can. Rate limiting. Yes you can. And so then all of a sudden playing in the back of my mind was that song by Tribe Called Quest, Can I Kick It? Yes, you can. East West authentication. Yes you can. And now I’m feeling like the Kong guru at Zillow. It’s like, “Can I?” Yes, you can. I just wanted to have the tape on a big old boom box. Just ready to go for any time someone came up to me and was can I do this with Kong? And so like the last one was lambda. Yes you can. So can you kick it? Can you Kong it? You absolutely can.

So I’m going to get into a couple of specific use cases that I had introduced in that last side. One was our east to west authentication. When we think of Kong API gateway, a lot of that is north and south, and that is basically data coming in and out of your data center. East, west is that traffic within it. So we had a specific service, it’s an email subscription based service that manages a large number of campaigns and people subscriptions to those campaigns, and they were definitely concerned about the pattern of, oh, hey, I need access to this, I’m going to go look at this other service, copy and paste code from it, and then my stuff is up and running and then all of a sudden you have these inherent consumers that you’ve onboarded and not known about it. Because email can be such a tricky and very spammy thing, they didn’t want anyone just having access to it that and they really wanted to control access within that service to specific endpoint.

So if I’m creating a campaign for my specific microservice, you’re going to be limited to the scope of that particular campaign. And so they came to us with this potential opportunity. We decided that we’ll create an API endpoint, for each service route. We then had a one to one relationship for each API endpoint with a white list group. And then for each of the microservice consumers that we onboarded, we created a consumer for them, and then added them to each of the groups for the end points that they needed access to. And then, for the API keys we use our own version of Vault for escrowing those values. So the service owners themselves don’t even have access to them. Those get substantiated on deploy of the application.

And then came along caching. We had an old service that was a struggling with the current load of our website. It was a service that was hit for every home detail page. So basically when you go to Zillow.com and you’re looking at a specific property, there were property attributes that were being loaded from this service that we just couldn’t keep up with the current load, it was an older one that was tied to SQL server and they were thinking about browning out the service until they could build the replacement using DynamoDB. Then they came to us, it’s like, “Let’s do some caching.” And initially it was Squid and Varnish and our ops team was like, “We don’t want to get into maintaining this” because when the development teams come to you and say six months, yeah, it’s going to be done in a year.

And this was at the time we were starting to evaluate Kong Enterprise because we are really starting to ramp up our workloads to enterprise levels and it was becoming a core services are. And so not only from the support perspective, but looking at this caching plugin, we went into an evaluation of Kong enterprise and found that this was going to be a great solution for us because then we’re not introducing new technology that we had to maintain. We were already had that Kong infrastructure. We’d already built that field of dreams. And so onboarding this was very easy. That, And we looked at the complications of caching with other solutions. Having Redis a backend where we’re warming the cache for every single node at the same time. And not having to do that on an individual instance basis was a really powerful advantage for us.

And so this is when Kong enterprise into production, we had looked at what we do the amount of data that we wanted to cache in order for the service to be healthy. We ended up sizing our backend Redis appropriately. We were getting about 70% hit rate on the cache and it brought down our average latency from 25 milliseconds to 4. And we were really impressed with that, having not implemented Kong and for a caching solution at all. So it was, again, really impressive for us.

And so looking back at a lot of our factors of our success, obviously Kong played a big part of that, but then we have the Zillow Group core values that we move fast and we think big in that ZG is a team sport and that we own it.

And so it was really great to see a lot of the different brands come together, embrace this idea, collaborate, build this solution with me. Along with that, it was very complimentary, a lot of the devops principles that we partnered with our customers, our development teams for success that we automate, automate, automate as much as we can, we make things self service, and we do things in a way that allows us to iterate quickly.

And then again, the power and flexibility of Kong just really opened up a lot of doors for us. And then Kong just being there caused people to really think differently about how we were doing things. And then lastly, there were a lot of features of AWS that we took advantage of in order to scale out to enterprise workloads. And so with AWS we ended up leveraging a lot of their best practices. So in terms of high availability, in each region they offer multiple availability zones, and these are basically separate data centers within a geographic region that give you redundant everything at every level. And it’s very important to leverage multiple AZs because if you’ve ever used AWS, some of those go down sometimes.

Then the ability to elastically scale and I think a lot of people when they think of scaling, it’s just upward and onward where it’s like I’m only ever going to be adding more. And I think one of the important tenants in the cloud is that if you really want to see that AWS savings at the end of the month, you also need to be able to scale down and it’s really important practice to implement. Otherwise, at the end of the month, it’s like, “Why am I spending a million dollars on this? I thought this was going to be cheaper?” It’s like, “No, you need to scale both ways” and it’s just like those sweatpants that you put on here. Here, I’ve been eating well at this conference all week, drinking free drinks. I’m going to expand those sweatpants out, but when I get home and start working out again, they need to still fit and not fall off my ass when I get home.

So scaling up and down, and also scaling horizontally and vertically. So when you’re looking at the database instances, the EC2 instances that you’re using, you want to be able to increase the size of those instances and then you also want to be able to add more instances to scale in both directions and then AWS has a lot of tools out there that help us with the automation process. Then security, even though it’s the last slide, or last item on the slide, it should never be the last thought. It should always be an integral part of anything that you do and also realizing that in AWS it’s a shared responsibility with you and AWS, that you should be leveraging a least privileged model that you only introduce permissions and access as they’re needed and then using security as code will allow you to make sure that your policies are enforced.

And so we were looking at our AWS resources for Kong. We went with PostgreSQL. Well, we just have in house experience with it. We’re very comfortable with it. We didn’t have any Kasandra before, and for the way that we wanted to manage and scale Kong, it was the right fit, and we went with Aurora because again, of the enterprise aspects. You look at RDS versus Aurora, you’re getting the multi-AZ clustered managed auto patched, automated backups, a lot of enterprise features that will help you withstand a disaster.

And then because we were using rate limiting and also the caching we wanted ElastiCache Redis back in, we used elastic cache for that, again, a managed service that scales out has and uses the write and replica technologies and then EC2 Auto Scaling as we added new nodes to the Kong cluster or there was say a hardware failure, in AWS we wanted to be able to replace those nodes or add new nodes as needed.

I think of one really cool thing is that’s often overlooked in AWS is the EC2 parameter store. It’s a great key value, secure string service that you can use to protect your data. And we actually use it for our database passwords, API keys, a lot of the sensitive information that we don’t want sitting out there in a repository or in our Terraform state files.

Again, another important pieces of plastic load balancing have that in multiple AZs to protect your services and scale out. Then CloudWatch, you need to be able to monitor and alert on the health of your services. We used, IAM for our instance profiles on the Kong nodes, so that they can then reach out and get access to various things like to set EC2 parameter store. So we’re not embedding keys in any of the nodes.

And then again, with security groups, least privileged model, everything that we did in AWS was really locked down to the specific things that needed access to it. So our load balancers can talk to a Kong node. Nothing else can talk to a Kong node, you can talk to the load balancer, Kong nodes can talk to the database, and everything is very secure and locked down.

And so this is what our architecture looked like and still looks like. So right in the middle, you’ll see a Kong cluster that has, again, the auto scaling group where the nodes in the cluster can scale out and scale back in as needed. We have an external load balancer to accept connections from the public internet and from other Kang clusters in different data centers. And we only expose that via SSL.

Internally, we have a load balancer that can do both HTTP and HTTPS depending on the service. And then for the Admin Gooey functionality and the enterprise edition, you can also access that via SSL and then the Admin API as well. And so the way we designed this was that we wanted to be completely transparent for the microservices. And so our consumers actually hit a Kong end point in their local VPC that adds the API key for them, forwards that to the remote Kong cluster, it validates the API key as that consumer and then forwards it onto the microservice. So that way, we in operations can seamlessly do key rotation without having to impact a service deploy or have developers reconfigure things.

So in terms of provisioning resources, infrastructure as code is the new buzzword, and it’s a great buzzword in terms of what it actually implies, and so what is it? It’s just machine readable configuration of your data center and this can be a script or a declarative definition of what you want your infrastructure to look like and how it should behave and the benefits to doing it this way is that you can then version it, you can put it in a repository and iterate on that and be able to revert back to a previous state as needed. It’s also shareable and that was very valuable to us at Zillow because we have multiple brands, multiple devops teams, and multiple people provisioning and it’s reusable, In that way, we didn’t have to reinvent the wheel at each step. We were always using the same code together, and then repeatable because even within a devops groups, we were deploying multiple Kong clusters and so we needed a way to ramp up quickly to meet the demands of our customers.

And so at Zillow, Terraform was our way of doing that. There are obviously other tools out there for AWS, but we had already standardized on Terraform and so that’s how we did Kong. And Terraform is just a tool that codifies a APIs into declarative configurations. And you can go to the website there are some of the additional benefits to Terraform is that it is open source just like Kong and has a great community behind it. Another benefit is that it can really manage your complex change sets. And so if you’re looking at introducing say a new resource or modifying an existing resource, Terraform can go look at what’s already existing in your VPC. Compare that to the changes you want to apply and only make those changes. It can also manage resource dependencies. So if you have a security group that depends on another security group, or if you have a EC2 instance that’s going to depend on a database, Terraform can help you easily manage those dependencies to make sure that resources are created, modified in a destroyed in the appropriate order.

So getting started with Terraform, you can just go down to their website, download it for free, it’s available for a number of different platforms, and you can use homebrew on the macOS to get started with that as well. You just unzip the binary, place it into a folder that’s in your path. And here’s some instructions to help you do that. Super easy to do. Then you just verify your installation. Here, I’m on my system called Awesome. I have another one called Booya, Terraform –version, and it will give you what you have currently installed. It’s actually important to take notice of this because Terraform is actually a project that iterates really quickly. They release new versions all the time and it’s important to stay up to date, you actually may find when you start collaborating with other people, if they’ve downloaded version after you and have something that’s more up to date, you’re going to have to upgrade in order to modify the state.

So getting started with Terraform, there are a number of different providers that you can use to basically describe provision change and destroy resources in your environment. And there was actually over 70 officially supported providers, AWS being one of those. It’s been brought up in other talks as well. There’s some community providers as well, and Kong being one of those. So a very flexible and powerful tool.

So Terraform configuration, that’s basically just text files. There is a Terraform format where it has a .tf extension, and this is actually the preferred way to describe your infrastructure because it’s human readable, you can add comments to it and the declarative format for that is HCL for Hashicorp’s Configuration Language. You can actually also do it with json with a tf.json extension. But this is really designed for applications that would be generating the Terraform for you in order to apply.But again, realizing the limitations of json, it’s not as human readable and you’re not going to have the comments.

So configuration semantics for Terraform is it basically looks at all the files in your directory, orders them alphabetically and merges them all together. And so if you say create a definition of a resource in multiple files, you’ll actually get an error on that merge because of the multiple declarations. There is a pattern for overriding. I’ll list that here for further reading on your part, but I’m not going to go into details, just so we can focus on Kong. So yes, Terraform is declarative, the order of a reference in the variables within the files don’t matter. It’s going to merge them all for you. So some basics, typically, you’ll lay out a directory with a main .tf, and this is where you specify your provider. This could be the AWS one where you’re going to give it some credentials, it could be the Kong one where you give it your Admin API token.

Then you define resources. And basically any file name that you want, typically would be it would be redis.tf or aurora.tf to describe the resource that you’re trying to provision. It’s just basically a component of infrastructure. And you can actually have multiple definitions within one file. A data source, typically stored in data.tf basically references an existing piece of infrastructure that wasn’t created within your Terraform directory. It may have been created by someone else in their Terraform directory, but instead of having to statically reference, say a resource ID like an ARN, you can use the data source to have it pull in that information for you, so that if you were to say rebuild a VPC and you get a new ARN, you’re not having to update those static references in each place.

And then variables are basically just parameters that you can specify. Think of any other programming language, even though HCL is not a programming language, it’s the same principle. And then lastly, modules and these are basically a Terraform directory of resources that are encapsulated into one group that you can be reusable.

And so we developed a module for provisioning our Kong clusters in AWS. And I’ve tried to make them pretty low barrier to entry in terms of its prerequisites. Obviously you’re deploying into AWS, you’re going to need a AWS account. We do everything in VPCs and so you’ll have to have your VPC setup. Then you’ll have public and private subnets, ones that are exposed to the Internet and one that are completely internal and this is actually a best practice for AWS so that you only expose things as needed. And we do labeling. We do a lot of tagging in our AWS accounts and this way, again, we can use data sources to reference those without having to say have a static reference to a subnet ID in there. And so for this module, you just have to label your private and a public subnets using the type tag.

And then a default VPC security group. This is going to be for giving you SSH access to the Kong nodes. So it allows you to choose how you want to do that, whether it’s, say a corporate subnet or a bastion host. And then you’re going to need, an SSH key pair for SSH into the Kong nodes, a SSL certificate for the HTTPS on load balancers. And this can actually be different for each one of the SSL endpoints. And then lastly, you just need Terraform, so hopefully very low barrier to get in. You need your AWS account setup and Terraform.

And so happy to announce that we’re releasing this open source project for everyone to share. It’s now Zillow Group/Kang-Terraform on github, and we’ve added it to the con hub that was released earlier today.

And so here’s an example of building your Kong cluster using our module. Pretty easy to do. And so this is the point where it’s like, “Wabam. You got your Kong cluster, I’m selling it on TV.” Like one of those, yeah, “It’s now yours for three easy payments.” And that last one is going to be super complex because don’t all have UCS majors feel robbed for never having using calculus throughout your entire career? So I made that last payment super complex so that you can apply some of that, feel like you get value for that CS degree. And so provisioning with Terraform easy as one, two, three, Terraform init, plan, and apply. And again, wabam, you’ve got your Kong cluster, and this is my Oprah moment. It’s like, “You get a cluster, you get a cluster, he gets a cluster.”

So while that’s actually applying, and setting up all the resources, you actually have to go and do some things. And so if you go into AWS console, into the parameter store, you’re going to want to add a password for your Kong cluster. And basically, you can set it to whatever you want. I don’t have that in any of the Terraform tf files because again, we don’t want that being committed to a repository and then being exposed to people that shouldn’t have it.

And the same thing with the license for EE or AM for the bin tray off. And this is only if you’re doing the EE edition and you can actually do CE and EE with this module. And so the Kong nodes are running minimal Ubuntu and I’m actually a super huge fan of it. Came from the Debian world long ago. But Ubuntu is really a modern operating system that’s built for the cloud. Minimal Ubuntu has a very small footprint. I think it’s under 100 megs. It’s very secure because again, we’re reducing the surface layer and scope of what’s being installed. It’s really fast booting. So I can provision an instance in 90 seconds from scratch. Again, it has an optimized kernel for AWS to give you even better performance.

The Kong service itself is installed for you. It’s supervised under a program called runit or optionally also called by it’s command line tool sv. And basically, this will manage the Kong process for you. So if somehow it crashes or fails, it will restart it for you. We’ve also added a Kong splunk plugin for logging. So a great segue from the previous talk where he’s talking about doing splunk logging, the splunk plugins now released today.

Then automatic local log file rotation. For our declarative management of the API endpoints, we started out with Kongfig, and this was actually before the Terraform provider for Kong was released. And so that also gets installed.

And so some of the cool hacks I saw were interesting when I created this module was how do I do ELB, elastic load balancing health checks? And this actually became the first Kong endpoint. And so you have a slash status on the Admin API. For CE, we didn’t expose our admin API on any of our load balancers. And so I’m like, “I’ll just create a slash status on the Kong gateway that points to the local host status. And so that way I can do health checks.”

Then enterprise version was just a modification of that because if, again, if you look at, can’t really do off when I’m just giving it an HTTP endpoint for our status. So with enterprise, our back is enabled by default and what I did was created to monitor user, with it a token that has access to slash status. Then we create the Kong slash status endpoint and modify it using the request transformer plugin to add the Kong admin token for the monitor user, which is just monitor.

And so once you’ve provisioned your cluster, there are some additional steps you want to take, by default, the root password is just KongChangeMeNow#1. And so you’ll want to log into one of your instances and change it. It’s not too much of a security threat because only the Kong instances themselves have access to that PostgreSQL database, but it’s definitely a good practice. You can then update that root password in the EC2 parameters store. So as you provision new Kong nodes, it has up-to-date a configuration. Also, you’ll want to enable IP white listing on that slash status endpoint so that way you’re not exposing it to the public on your external load balancer. And then for enterprise edition, the default admin user in our back is just zg-kong-2-1. And obviously that’s going to rev with each version, but you’ll then can log into the Admin Gooey and change that.

I’m not sure if the Kong Terraform provider can do our back yet, but Kongfig can’t, and so, we just do that manually through the Gooey. You’ll also then want to update that value in the EC2 parameters store because that admin token can be used for your declarative definitions of Kong endpoints. Some additional features about this plugin is that almost all the settings are tweakable. You can change your EC2 instant sizes, your timeouts, your thresholds, and also resources can be optionally provision. So if you’re not using Redis, you don’t have to enable Redis. Say you have an existing PostgreSQL database that you want to use, you don’t have to provision Aurora.

Then a big thing for us is CloudWatch. And so there are CloudWatch actions that you can define that will little trigger on the various thresholds that can be tweaked.

And so this allows you to send email or send a pager duty if a Kong node goes down, you’re hitting four or five x 100 thresholds. You can also add bastion host access to all the resources since everything is locked down. Say you want to manage the PostgreSQL database outside of that, you can add that to the bastion hosts cider blocks.

Some recommendations for running in production: use the c5.2xlarge instances. This is actually Kong's recommendation as well. If you look at it, you're going to see your host running maybe 3 to 5% CPU and something like 800 MB of RAM out of the 16 GB I think it has, and you're like, "Why am I doing this?" It's for the networking. If you look at AWS instance sizes, you have to evaluate not only the CPU and memory that you need to provision for; the network is very important.

If you look at those T2 instances, you're going to get burstable network speeds that are going to be sporadic, and they're not very kind to production workloads. The c5.2xlarge gets you into that 10-gigabit range, which will scale for production. Obviously, you're going to want to look at those CloudWatch logs and implement auto scaling policies for when you shrink down and when you go big for your production workloads. The module also allows you to add additional tags to all of the AWS resources. When they get provisioned, they're going to have a service and description tag, and when you provision the service, its name is going to be zg-kong-2-1- plus whatever environment you define: dev, staging, prod. But you can also go in and add additional tags in the module itself, which will be passed to every single resource that gets created, and this will help you with your auditing and billing.

I also highly recommend you send those Kong logs to a remote endpoint. I mentioned Splunk, and that's what we use. It really enables you and your developers to have visibility into the health of the cluster and the health of the application. Our teams heavily rely on this to monitor and alert on the health of their applications, and we use it to do the same for the clusters. One really cool thing: when we released the Splunk plugin at Zillow, everyone immediately went in and started looking at latencies, and they were just blown away. I'm really impressed with how performant Kong really is. Seeing an average latency of zero milliseconds on processing and a really low p99, I think it was around 4 milliseconds, was impressive, and it just boosted everyone's confidence in Kong.
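
The Splunk plugin has its own configuration schema, so as a generic stand-in, here's how Kong's bundled http-log plugin can ship the same JSON access logs to an HTTP collector such as a Splunk HTTP Event Collector (the endpoint is a placeholder, and authentication setup varies by plugin and Kong version):

```bash
# Ship Kong's JSON access logs to a remote HTTP endpoint, enabled globally
curl -sX POST http://localhost:8001/plugins \
  -d name=http-log \
  -d config.http_endpoint=https://splunk.example.com:8088/services/collector/raw
```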

Lastly, we talked about API key management, because again we're putting this in declarative form where you don't know what that key is. We actually use pwgen, a little command line utility for Linux and Mac, to generate secure passwords. That's what the -s flag does: it makes the password more secure. You can then specify the length, and for API keys we use 32. The final 1 just says, give me one password. This way, as you go into the declarative world for your API endpoints, when someone gives you, say, a pull request to create a new endpoint and there's an API key that's supposed to be in there, they can just provide you a token name. You generate a value for that token using pwgen, store it in something like Vault or the EC2 Parameter Store, and have it automatically inserted into the rendered config as it's applied.
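
Concretely, that workflow is something like the following, with the parameter path being a made-up example:

```bash
# Generate one 32-character random key; -s asks for a fully random "secure" password
API_KEY=$(pwgen -s 32 1)

# Stash it where the config rendering step can pull it in at apply time
aws ssm put-parameter \
  --name /kong/dev/api-keys/new-endpoint \
  --type SecureString \
  --value "$API_KEY"
```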

Some thoughts on API management: we're using Kongfig currently. Again, that was because the Terraform provider didn't exist at the time, but it does have some limitations; it doesn't support services and routes in the newer versions of Kong, and its development has really slowed down. So we've been looking at some alternatives. I think Terraform is going to be the go-to in the future, because it also puts this into the same declarative language that we use to provision Kong itself. And then there's also Ansible.

You can also set up policies to make this completely self-service, so that your customers can send you a pull request for the endpoint they want to create or modify, and as you merge to master, it's automatically applied to the appropriate cluster (a sketch of that apply step follows below). As for some ideas I've had for the future: this is going to increase the barrier to entry into using our Kong cluster, but we're looking at maybe using a custom AMI built with HashiCorp's Packer. This would basically allow us to configure Kong and a lot of the static content before the instance even boots, so that we're only applying database passwords and some of the dynamic configuration as it comes up.
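
That post-merge apply step could be as simple as the following sketch, where the environment name, file layout, and Admin API host are all assumptions:

```bash
# CI step after merging to master: apply the declarative endpoint definitions
# for the target environment against that cluster's Admin API
ENV=dev
kongfig apply \
  --path "endpoints/${ENV}.yml" \
  --host "kong-admin.${ENV}.internal:8001"
```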

Also StatsD integration, so that we can get more metrics on the Kong nodes themselves, and additional CloudWatch triggers so that we can alarm on CPU, memory, and disk. Then also providing some example auto scaling policies; we're still in the process of setting that up ourselves. Right now we provision however many Kong nodes we think we need for a given environment and just keep that number static. The auto scaling policies replace nodes as needed if, say, AWS changes hardware underneath us, but we haven't really had a need to scale out yet, because Kong is so performant that we just haven't had to go beyond the minimum we feel we need for reliability.
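
For the StatsD piece, Kong ships a bundled statsd plugin, so a minimal global enablement might look like this, with the agent host and port as placeholders:

```bash
# Emit per-request metrics from every Kong node to a local StatsD agent
curl -sX POST http://localhost:8001/plugins \
  -d name=statsd \
  -d config.host=127.0.0.1 \
  -d config.port=8125
```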

Then we’re also evaluating the PerimeterX plugin, PerimeterX is an enhanced bot protection framework. We use it on our primary website and then to be able to offer that within our Kong service as well would be a great compliment to it. And then instead of having all of our different con clusters talk to each other over the internet, we’re thinking about setting up, say a transit Kong VPC where there’s only Kong in it and we don’t mind peering with all of our different brands because at some level we do trust them and that way we’re getting better performance at the VPC level.

With the exciting announcement of 1.0, we're definitely looking into the service mesh opportunities there. But even before the sidecar proxy was released, I was thinking: what if we just got rid of all of our load balancers in front of our services and just had Kong? You would have all of those great features available to you and basically have a service mesh within your data center without refactoring everything to add sidecar capabilities. Even though Kong is out in the network and not in the sidecar itself, all those features still exist and are available to you immediately, before, say, migrating to Kubernetes or changing out your entire stack.

So, a big thanks to some people at Zillow Group: Toby Roberts, the VP of Operations, and Leif Halverson, the Director of Infrastructure, who's my supervisor. They were great supporters of our transition to Kong Enterprise and had no problem justifying that expense. Jan Grant, who's our PM for the project and kept me on task with all of those Jira tickets. And then my own teams, and this isn't just production operations at Zillow, because I'm in Seattle, but also the other brands. I've got some guys sitting up here in the front row who were very prominent in the implementation of all of this.

And all the product teams we partner with. It was exciting to do this with a bunch of teams that were eager to onboard, and it was just a really fun process. Then all the people at Kong headquarters. You really struggle to find people in this industry that are so bright, pleasant, and fun to work with: Danny, Aaron, and Harry on the cloud team, Ben Helves as well, our customer success engineer Travis, and I could go on. And then all of you Kongers, because I think Kong is such an awesome technology, but then to back it up with such a vibrant and cool community, my hat's off to all of you.

Thank you.

The post Kong with Terraform: A Field of Dreams appeared first on KongHQ.
