


One Network: Cloud-Agnostic Service and Policy-Oriented Network Architecture


Summary


Anna Berenberg talks about One Network, Google's unified service networking overlay, which centralizes policy, simplifies operations across diverse environments, and enhances developer velocity. Learn about its open-source foundation, global traffic management, and vision for future multi-cloud and mobile integration.

Bio

Anna Berenberg is an Engineering Fellow and Uber Tech Lead for Foundation Services, One Network, and Cloud Load Balancing at Google. She has spent the last 18 years at Google, including 9 years in Google Cloud. She co-authored the article "Deployment Archetypes for Cloud Applications," published in ACM Computing Surveys.

About the conference

Software is changing the world. QCon San Francisco empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Berenberg: One Network sounds unusual at a time when everybody is building more things, and we were no exception. As Google, we were latecomers to the cloud computing game. With that, our teams started building, and within a few years, by 2020, we had organically grown to more than 300 products with a lot of infrastructure and multiple network paths. Our customers noticed that the products were not integrated, and we noticed that our developer velocity was actually low, because every time we needed to release a new feature, we had to release it n times, once on every network path, each on different infrastructure.

The scariest and most important part was policy. Policy is something that cloud providers worry about day and night, because all the cloud infrastructure is controlled by policy. How do you make sure that policies are enforced on every path, without any exception, without needing to modify these 300 products?

Why Is Networking Complicated?

Let's look at why networking is complicated. On the left, you see the Google prod network. Google actually has its own network, as you know, and it runs Search and YouTube. On top of that we build cloud products; for example, Borg is a container orchestration system, and on it we built Cloud Run. On top of Cloud Run there is its own networking. Then there is virtual networking on GCP itself, which is called Andromeda. On top of it we build different runtimes like GKE, which is Kubernetes, and GCE, which is Compute Engine with VMs.

Over there, again, there is its own networking, and GKE, as you know, Kubernetes has its own layer of networking. On top of it there are service meshes. The same thing applies on multi-cloud, on customer premises, or in the distributed cloud, where the same layers appear. Then what happens? In each environment we build applications, and these applications run on different runtimes and have different properties. This combination of infrastructure on different network paths and different runtimes created an n-squared problem of what I usually call Swiss cheese: something works here, something doesn't work there. What's the solution? The solution is what we called One Network. It's a unified service networking overlay.

One Network (Overview)

What is the goal of One Network? We want to define policy uniformly across services within the constraints of what I just explained: heterogeneous compute and networking, different language runtimes, coexistence of monolith services and microservices. It's all across different environments, including multi-cloud, other clouds, public and private clouds. The solution is One Network, which came out of frustration. You can think of it as: why do I need so many networks? Can I have one network? One Network it is. Policy is managed at the network level. We iterate towards One Network, because it's such a huge architectural change, to manage cost and risk. It's very much open source focused, so all the innovation went into open source and some went into Google Cloud itself.

How do you explain One Network? We build on one proxy. Before that, every team had its own, and there were basically a lot of proxy wars floating around. One control plane to manage these proxies. One load balancer that is wrapped around the proxy to manage all runtimes: GKE, GCE, Borg, managed services, multi-cloud. Universal data plane APIs to extend the ecosystem, so we can extend it with both first-party and third-party services. Uniform policies.

Again, it's across all environments. When I presented this particular slide in 2020, everybody said it just sounds too good to be true. It was, at that time. Who is going to benefit from One Network? Everybody, actually. These are the roles we put together that benefit from One Network. They range from people who care about security policy, to DevOps and networking folks who care about network policy, to SREs who care about provisioning large numbers of microservices or services, to application developers who want to manage their own policy without needing to interact with the platform admins or platform engineering folks. There's the width, the depth, and the isolation at the same time, so it's partitioned as well as universal. Everybody cares about orchestration of large environments and everybody cares about observability.

One Network Principles - How?

What are the principles we built One Network on? We build on five principles. We build on a common foundation. Everything is a service. We unify all paths and support all environments. Then we create an open ecosystem of what we call service extensions, which are basically pluggable policies. We then apply and enforce these policies on all paths uniformly.

1. Common Foundation

Let's start with the first one. This is the One Network pyramid I put together because I was thinking about how to explain the narrowing scope of the layers. We start with the general-purpose Envoy proxy, and we'll talk more about it. It's an open-source proxy available on GCP, on-prem, anywhere. Then we wrap it and build GCP-based load balancers around it, which work for VMs, containers, and serverless. On top of that you can build a GKE controller, and now you have the GKE gateway, which uses the same underlying infrastructure but only serves GKE services and workloads, and understands the behavior of GKE deployments.

The top of the pyramid is where you don't see the gateway at all because it's fully integrated into, for example, Vertex AI, which is our AI platform. It's just an implementation detail. All of that uses the same infrastructure across the products and across the paths. All of these layers are controlled by a single control plane which we call Traffic Director. It has a formal API and everything. When I say single, it doesn't mean a single deployment; it's the same control plane that can be run regionally, or globally, or can be specialized per product if there is a need for isolation. It's the same binary that runs everywhere, so you can control it and orchestrate it the same way.

This is the One Network architecture, the North Star. I want to walk you a little bit from left to right; you can see different environments. It starts from mobile, goes to the edge, then to the cloud data center, then on to multi-cloud or on-prem. There are three common building blocks. There is a control plane, Traffic Director, that controls all of these deployments. There are open-source APIs between Traffic Director and the data planes, called the xDS APIs. Then there are data planes. The data planes are all open source based: they're either Envoy or gRPC, both of which are open-source projects. Having this open-source data plane allows us to extend to multi-cloud, to mobile, and basically anywhere outside of GCP, because it's no longer proprietary. Talking a little bit about the Envoy proxy, it came out in 2016, and we really like it.

The reason we like it is that it was a great, new, modern proxy with all the advanced routing and observability as first class. It got immediate adoption. Google heavily invested in it. The real reason we like it is not because it's a great proxy but because it's a platform with these amazing APIs. It has three sets of APIs. It has configuration APIs between the control plane and the data plane, the proxy itself, that configure it; these are eventually consistent APIs. They provide both management plane and control plane functionality. There are generic data plane APIs. There is external AuthZ, which does allow and deny, so you can easily plug in any AuthZ-related system. There is an API called external proc, where you can basically plug anything in behind it; it can modify the request, you can modify the body, and then return it back. It's very powerful. Then there are the WebAssembly binary APIs for proxy-Wasm.
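To make the generic data plane APIs concrete, here is a minimal sketch of an external AuthZ service in Go, written against the open-source Envoy ext_authz gRPC API from go-control-plane. The header name and the allow rule are invented for the example, and this is not how Google's own AuthZ is implemented; it only shows the shape of the plug-in point.

```go
package main

import (
	"context"
	"log"
	"net"

	authv3 "github.com/envoyproxy/go-control-plane/envoy/service/auth/v3"
	rpcstatus "google.golang.org/genproto/googleapis/rpc/status"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
)

// authServer implements the Envoy external authorization (ext_authz) API.
// The proxy calls Check for every request and allows or denies it based on
// the response.
type authServer struct{}

func (a *authServer) Check(ctx context.Context, req *authv3.CheckRequest) (*authv3.CheckResponse, error) {
	headers := req.GetAttributes().GetRequest().GetHttp().GetHeaders()
	// Hypothetical policy: allow only requests carrying an API key header.
	if _, ok := headers["x-api-key"]; ok {
		return &authv3.CheckResponse{
			Status: &rpcstatus.Status{Code: int32(codes.OK)},
		}, nil
	}
	return &authv3.CheckResponse{
		Status: &rpcstatus.Status{Code: int32(codes.PermissionDenied)},
	}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":9191")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	s := grpc.NewServer()
	authv3.RegisterAuthorizationServer(s, &authServer{})
	log.Fatal(s.Serve(lis))
}
```

The same pattern applies to the other generic APIs: the proxy stays unchanged, and the policy lives behind a well-known gRPC contract.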

There are also specific APIs. Envoy had, right away, RLS, the rate limiting service. It's interesting that this could have been achieved via external AuthZ, which is more generic, because rate limiting is also allow and deny, but here it was specialized. We expect that in the future there will be more of these specialized APIs. For example, we're thinking of an LLM firewall API: you can classify incoming traffic as AI traffic and apply rules that are specific to AI traffic. You can do safety checks. You can do block lists. You can do DLP. The data plane itself also has filters.

The Envoy proxy has filters, both L4, which is TCP, and L7 HTTP filters. There are two types of them. One type is linked into Envoy, and that determines ownership: if we as a cloud provider link them, then they can only be our filters, and if a customer runs Envoy on its own, then they're the customer's. We cannot mix and match. WebAssembly filters are a runtime where you can have both first-party and third-party code loaded into the data plane. Google heavily invests in the open-source proxy-Wasm project, and we actually released a product on it. These filters can be chained; they can be request based, response based, or request-response, depending on how you need to process things. All of that is configured by Traffic Director.

Talking about Traffic Director, it's an xDS server. What it does is combine two things: very quickly changing dynamic configuration, for example the weights and the health, with static configuration, such as how you provision a particular piece of networking equipment. The magic we put behind Traffic Director we call GSLB, the Google Global Service Load Balancer. It's a globally optimized control plane. It's the same algorithm that Google uses to send traffic for Search, YouTube, Gmail; anything you do with Google uses this load balancer behind it.

GSLB optimizes globally for RTTs and the capacity of the backends. It finds the best path and the best weights to send traffic with. It also has centralized health checking, so you don't need to do n-squared health checking from the data plane; at one point we noticed that if you do n-squared health checking, you end up with 80% of the throughput through the data center being health checks alone, leaving only 20% for actual traffic. Removing that 80% overhead is great. Also, it's integrated with autoscaling, so when a traffic burst occurs you don't scale up step by step; you can scale up in a single step because you know how much traffic is coming, and in the meantime traffic is redirected to the closest available capacity. Traffic Director also handles policy orchestration: when an administrator creates a policy, it is delivered to Traffic Director, and Traffic Director provisions all the data planes with this policy, where it is enforced.
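To make the "xDS server" idea concrete, here is a minimal sketch of an xDS control plane built with the open-source go-control-plane library, which speaks the same protocol family that Traffic Director uses toward Envoy and proxyless gRPC. It is not Traffic Director itself; the node ID, the port, and the empty resource lists are placeholders, and the snapshot API assumed here is the one in recent go-control-plane releases.

```go
package main

import (
	"context"
	"log"
	"net"

	discoveryv3 "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v3"
	"github.com/envoyproxy/go-control-plane/pkg/cache/types"
	cachev3 "github.com/envoyproxy/go-control-plane/pkg/cache/v3"
	resourcev3 "github.com/envoyproxy/go-control-plane/pkg/resource/v3"
	serverv3 "github.com/envoyproxy/go-control-plane/pkg/server/v3"
	"google.golang.org/grpc"
)

func main() {
	ctx := context.Background()

	// The snapshot cache holds the per-node view of the configuration; the
	// data planes converge on whatever snapshot is stored here (eventually
	// consistent, as described above).
	snapshotCache := cachev3.NewSnapshotCache(true, cachev3.IDHash{}, nil)

	// Resource lists are left empty in this sketch; a real control plane
	// would fill them with Cluster, RouteConfiguration, and Listener protos.
	var clusters, routes, listeners []types.Resource
	snap, err := cachev3.NewSnapshot("v1", map[resourcev3.Type][]types.Resource{
		resourcev3.ClusterType:  clusters,
		resourcev3.RouteType:    routes,
		resourcev3.ListenerType: listeners,
	})
	if err != nil {
		log.Fatalf("snapshot: %v", err)
	}
	if err := snapshotCache.SetSnapshot(ctx, "example-node-id", snap); err != nil {
		log.Fatalf("set snapshot: %v", err)
	}

	// Serve the snapshot over the aggregated xDS (ADS) gRPC stream, the same
	// protocol Envoy and proxyless gRPC clients use to talk to their control plane.
	xdsServer := serverv3.NewServer(ctx, snapshotCache, nil)
	grpcServer := grpc.NewServer()
	discoveryv3.RegisterAggregatedDiscoveryServiceServer(grpcServer, xdsServer)

	lis, err := net.Listen("tcp", ":18000")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	log.Fatal(grpcServer.Serve(lis))
}
```

Pushing a new snapshot version for a node is what "delivering policy to the data planes" amounts to at this layer.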

2. Everything as a Service

The second principle is everything as a service. This is actually a diagram of a real service, an internal cloud service. Think about how we manage such a service. There are different colors, and they mean something. There are different boxes, and they mean something. The lines are all over. How do you reason about such an application? How do you apply governance? How do you orchestrate policy across this? How do you manage these small independent services, or how do you group them into different groups? One Network helps here.

Each of these microservices is modeled as a service endpoint, and that lets us group these service endpoints and orchestrate policy over them without actually touching the services themselves, so everything is done on the network. There are three types of service endpoints. There's the classic service endpoint for a customer workload, where you take a load balancer, you put it in front, and you've got a service endpoint. You hide, for example, a shopping cart service in two different regions behind it. That's typically how a service is materialized.

The second is a newer one, where there is a relationship between a producer and a consumer. For example, you have a SaaS provider who builds a SaaS on GCP and then exposes it to the consumer via a single service endpoint, materialized via the Google Cloud product called Private Service Connect. There's a separation of ownership, and the producer doesn't have to expose their architecture to the consumer. The consumer doesn't know about all the stuff the producer is running. The only thing they see is a service endpoint, and they can operate on it. In this case, the producer can be a third party outside of your company, or even a shared service.

If you have a shared service within your company and you want multiple teams to use it, this is the type of architecture you want, because you want to separate your implementation from its consumption, and then allow every customer or consumer to put their policy on a service endpoint. You can expose a single service to a consumer through a service endpoint, or as many as you want. There are also headless services. These are typically defined by service meshes, where they're within a single trust domain.

In this case, services are just materialized as abstractions, because there is no gateway here, there's no load balancer. Each of them is just a bunch of IP ports of the backends. An example of this is AI, obviously. We're looking at a model as a service endpoint. Here the producers are model creators, and the consumers are GenAI application developers. The producer's inference stack, for example, is hidden behind a Private Service Connect, so nobody even knows what it's doing there. Then a different application connects to this particular service endpoint.

3. Unify All Paths and Support All Environments

The third principle is to unify paths and environments. Why would we want to do that? It's to apply uniform policies across services. To unify paths, we first have to identify paths. You can see here eight paths that we identified. This is a generalization, actually; there are lots more, but we generalized them to eight.

Then, for each of them, we identify the network infrastructure that implements the path and where the policy is applied. You can see there is an external load balancer for internet traffic, an internal load balancer for internal traffic, service meshes, an egress proxy, even mobile. Let's look at them one at a time. GKE gateway and load balancer: typically that's how services are materialized. What we did is evolve Envoy, which was the original deployment, into a managed load balancer, and we spent more than a year hardening it in open source so it could serve internet traffic. We also have global and regional deployments.

Global deployments are used by customers who have a global audience, who care about cross-regional capacity reuse, or who in general need to move traffic around. Regional deployments are for customers who care about data residency, especially data in transit, or who look at regionalization as an isolation and reliability boundary. We provide both. It's all connected to all runtimes.

The second deployment here is a service mesh. Istio is probably now the most used service mesh. The most interesting part about service meshes is that they very clearly define what service-to-service communication needs. It needs service discovery. It needs traffic management. It needs security. It needs observability. Once you separate this into independent areas, it's easy to plug each of them in independently. As a Google product, we have Cloud Service Mesh, which is Istio based but backed by Traffic Director, with Gateway APIs as well as Istio APIs. It works for VMs, containers, serverless. That has shipped.

Then, Google has had a service mesh for more than 20 years, since forever, before service meshes were a thing. The difference between Google's service mesh and Istio or any other service mesh is that ours was proxyless. We had a proprietary protocol called Stubby. We had a control plane that talks to Stubby. We provision Stubby with the configuration and everything. It basically was a service mesh in the same way as you see it now. We exposed this proxyless service mesh notion to our customers and to open source, where gRPC uses the same APIs to the control plane as Envoy.

Obviously, that reduces resource consumption, because you no longer have an extra proxy. It's basically super flat, without any maintenance, because you don't need to install a proxy, there is no lifecycle management, there is no overhead here. A similar but slightly different deployment architecture is GKE data plane v2, which is based on Cilium and eBPF in the kernel. It simplifies GKE deployment networking. Scalability is improved because there is no sidecar. Security is always on, and observability is built in. For L7 features, it automatically redirects traffic to the L7 load balancer in the middle.
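As a sketch of what the proxyless mesh looks like from the application side, here is how a Go gRPC client can join it using the open-source xds resolver. The target name, port, and bootstrap setup are assumptions for illustration, not a specific Traffic Director configuration.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	_ "google.golang.org/grpc/xds" // registers the "xds" resolver and balancer plugins
)

func main() {
	// With the xds scheme, the gRPC client fetches listeners, routes,
	// clusters, and endpoints from the control plane named in its bootstrap
	// file (pointed to by the GRPC_XDS_BOOTSTRAP env var). No sidecar proxy
	// is involved; the library itself applies the pushed configuration.
	conn, err := grpc.Dial("xds:///wallet.example.internal:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to create xDS-backed channel: %v", err)
	}
	defer conn.Close()

	// Service stubs are created on conn as usual; routing, weighting, and
	// failover now follow the xDS configuration pushed by the control plane
	// rather than static DNS.
	_ = conn
}
```

The only difference from a plain gRPC client is the xds target scheme and the bootstrap file; the application code otherwise stays the same.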

Mobile we actually didn't productize. This is a concept, but we ran it for a couple of years, and it's very interesting. It extends One Network all the way to mobile, and it brings very interesting behaviors. First, as opposed to workloads in the data center, a mobile device cannot keep a persistent connection to the control plane due to power consumption, so the handshake is a little bit different. It also requires Traffic Director to build a cache to be able to support it. We tried it on 100 million devices, or a simulation of devices, and it actually worked very nicely. It uses Envoy Mobile, an evolution of the Envoy proxy into a library that is linked into the mobile application.

One of the interesting use cases here is, if you have 100 million devices and one of them goes rogue, or you need to investigate it, having a control plane allows you to identify that particular device, deliver a configuration to it, get the observability or whatever you need, or shut that one down. The value is there. The second project that is also a work in progress is control plane federation. Think about multi-cloud or on-premises, where a customer outside of GCP is running pretty much the same deployment, but it's not GCP managed. You're running your own Envoy or your own gRPC proxyless mesh with a local Istio control plane. In this architecture, we use the local Istio control plane to do the dynamic configuration and to make sure the health of the backends is propagated, so if the connection between Traffic Director and the local xDS control plane breaks, the deployment on-prem or in the other cloud keeps functioning just fine until they reconnect.

Then you still have a single pane of glass, and you can manage any number of these on-prem, multi-cloud, or point-of-sale deployments. You can imagine those going into the thousands, all managed from a single place. Bringing it all together, this is what it looks like. We already looked at this picture. You have all the load balancers and the mesh, and they go across environments, including mobile and multi-cloud.

4. Service Extension Open Ecosystem

That was the backbone. How do we use the backbone to enable a policy-driven architecture? We introduced the notion of service extensions. For each API that we discussed before, whether it's external AuthZ to do allow and deny, or the external processor, at every point there is a possibility of plugging these policies in. For example, a customer wants to have its own AuthZ; they don't like the AuthZ that we provide. They are able to plug it in. Another example is Apigee, which is our product for API management. Having service extensions changes the paradigm of how API management is done, because previously you needed a dedicated API gateway to do API management.

Here, API management becomes ambient. It's available because One Network is so big, and you can plug in at any point. The same API management is available at any point, whether at the edge, in service-to-service communication, on egress, or on the mesh. You have this change from a point solution to the ambient presence of the policies or the value-added services. Another example here is a third-party WAF. We have our own WAF, but our competitors can bring their own WAF. It's an open ecosystem. The customer can pick and choose which WAF to use on the same infrastructure. They don't have to plug additional things in and then try to tie it all together. It's all available.

The One Network architecture is all there. Before, we discussed how it looks at one point, and now you can see how you can plug this in at any point, everywhere, whether it's routing policies, security services, API management, or traffic services. How does it actually work? We have three types of service extensions. One of them is service plugins, which are Wasm based; they just went into public preview. Then there are Callouts, which are essentially serverless: you give us the code, and we run it for you.

Typically, people like to have those at the edge, where you can immediately do header manipulation and other small functions. Then you have Service Callouts, which are essentially SaaS services that have been plugged in. Here there's no restriction on size or ownership; it's just a callout to a service. Then there is what we call the PDP proxy. It is an architecture that allows plugging multiple policies behind a proxy, for caching and not just for caching: you can then combine this policy and that policy, operating on multiple policies at once.

Each of these policies is represented as a service too, and they're managed by AppHub, which is our services directory. Going into the future, we're looking at managing all of these through the marketplace, with lifecycle management, because there are going to be a lot of them. The question is going to be how one picks this WAF versus that WAF. You need a recommendation system. You need to worry about which one to choose.
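To give a feel for the Wasm-based service plugins mentioned above, here is a minimal sketch of an HTTP filter written against the open-source proxy-Wasm Go SDK (compiled with TinyGo). The header name is invented for the example, and this is a generic proxy-Wasm plugin rather than the exact shape of Google's service extensions product.

```go
package main

import (
	"github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm"
	"github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm/types"
)

func main() {
	// Register the VM context with the host (Envoy or a load balancer built on it).
	proxywasm.SetVMContext(&vmContext{})
}

type vmContext struct{ types.DefaultVMContext }

func (*vmContext) NewPluginContext(contextID uint32) types.PluginContext {
	return &pluginContext{}
}

type pluginContext struct{ types.DefaultPluginContext }

func (*pluginContext) NewHttpContext(contextID uint32) types.HttpContext {
	return &httpContext{}
}

type httpContext struct{ types.DefaultHttpContext }

// OnHttpRequestHeaders runs for every request passing through the data plane.
// Hypothetical policy: stamp a header so downstream services can see that the
// request was checked at the network layer.
func (*httpContext) OnHttpRequestHeaders(numHeaders int, endOfStream bool) types.Action {
	if err := proxywasm.AddHttpRequestHeader("x-policy-checked", "true"); err != nil {
		proxywasm.LogErrorf("failed to add header: %v", err)
	}
	return types.ActionContinue
}
```

Because the same Wasm runtime is available on every One Network path, the same plugin can, in principle, run at the edge, on an internal load balancer, or inside the mesh without being rewritten.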

5. Apply and Enforce Uniform Policies

Last, but not least, how do you apply and enforce uniform policies? We came up with a notion of four types of orchestration policies. The first is creating segmentation. If you have a flat network and you want to segment it, you can segment it by exposing one service and not publishing others. That forces traffic to go through a chokepoint, and there you can apply policies. It's easy to do because everything is a service, so it's easy to control which services are visible and which are not.

The second is to apply a policy to all paths. What happens in practice is that every application has multiple paths leading to it. For example, it's a very common deployment to have internet traffic and internal traffic going to the same service, to the same endpoint. When something happens, how do you quickly apply policy to all paths? You want to protect the application and the workload behind them, rather than worrying about whether you covered every path. You should hand that knowledge to a programmatic system that can go and orchestrate over the paths.

The third one is to apply policy across an application. One application defines a perimeter, a boundary in which all these services and workloads reside. A typical business boundary, for example, an e-commerce application contains a frontend, a middle-tier, a database. Then a different application contains catalog and other things. Then, one application can call into the service of another application. Within a given application, a security administrator can say, on the boundary, there are these policies.

For example, nobody but this service can talk to the internet; nothing else inside the application can talk to the internet. That means the policy needs to be applied to each of the workloads, for example, not to have public IPs or not to allow egress traffic. The fourth one is to deliver policy at the service level. That is management at scale. Imagine that you need to provision every VM with a firewall or some configuration, and you have a thousand of them. Instead, you can group these VMs into a single service, set a policy on the service, and then orchestrate it onto each individual backend.

This is how policy administration and enforcement are done, through the concepts of policy administration point, policy decision point, and policy enforcement point. We spoke about the One Network data planes, which are the policy enforcement points. Policy providers provide service extensions, and One Network provides the administration points. Basically, this allows customers to express policy at a coarser granularity, for example over an application or a group of workloads, and have it orchestrated. Let's take a couple of examples. How does it work? There's the notion of a service drain at Google.

Imagine it's 3:00 at night and you got paged. What are you going to do? You drain first and think second. That is how Google SREs operate. What does drain mean? You administratively move traffic away from where you got paged, or from that service. What are you going to drain? You can drain a microservice. You can drain a zone or a region. The microservice could be VMs, containers, serverless, or a set of microservices. The traffic just moves away administratively. Nothing gets brought down; it's still up, it just no longer receives traffic. Your mitigation is done, because the traffic has actually moved. Hopefully, the other regions are not affected. You're free to debug whatever happened with the running deployment. Once you've found the problem, you undrain, slowly trickle traffic back, more and more, get back to a working setup, and wait until the next 3 a.m. page happens.

How does it work with One Network? You can see traffic coming in through different data planes: through the application load balancer, and through the service mesh, whether the gRPC proxyless mesh or the Envoy mesh. They're all going to region 1 now. Then we apply the drain via the xDS API, and traffic moves across all of them at the same time. Here we showed it fully moved, but you can imagine moving 10% of traffic, 20%, whatever percentage of traffic you need to move. Or you can drain right away.

Another example is CI/CD canary releases, where we want to direct traffic to a new version. You can see here there are different clients. There are actual humans going through the application load balancer via some website. There's a call center going through the internal load balancer, a point of sale going through the Envoy sidecar service mesh, and even multi-cloud and on-prem going through, for example, the proxyless mesh. There are two versions of the wallet service, v1 and v2. We provision at the top, it delivers the configuration, and off we go. The traffic moves to v2.
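A minimal sketch of what such a canary split could look like at the xDS level, using the open-source go-control-plane route types that Envoy and proxyless gRPC both understand. The cluster names, domain, and helper function are invented for illustration; they are not the configuration Traffic Director actually generates, but they show how a single weighted route, pushed to every data plane, moves traffic on all paths at once.

```go
package main

import (
	"log"

	routev3 "github.com/envoyproxy/go-control-plane/envoy/config/route/v3"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

// canaryRoute builds a route that splits traffic between two versions of a
// hypothetical wallet service. A control plane would place this
// RouteConfiguration into the snapshot it serves over xDS, so load balancers,
// sidecars, and proxyless gRPC clients all apply the same split.
func canaryRoute(v2Percent uint32) *routev3.RouteConfiguration {
	return &routev3.RouteConfiguration{
		Name: "wallet-routes",
		VirtualHosts: []*routev3.VirtualHost{{
			Name:    "wallet",
			Domains: []string{"wallet.example.internal"},
			Routes: []*routev3.Route{{
				Match: &routev3.RouteMatch{
					PathSpecifier: &routev3.RouteMatch_Prefix{Prefix: "/"},
				},
				Action: &routev3.Route_Route{Route: &routev3.RouteAction{
					ClusterSpecifier: &routev3.RouteAction_WeightedClusters{
						WeightedClusters: &routev3.WeightedCluster{
							Clusters: []*routev3.WeightedCluster_ClusterWeight{
								{Name: "wallet-v1", Weight: wrapperspb.UInt32(100 - v2Percent)},
								{Name: "wallet-v2", Weight: wrapperspb.UInt32(v2Percent)},
							},
						},
					},
				}},
			}},
		}},
	}
}

func main() {
	rc := canaryRoute(10) // send 10% of traffic to wallet-v2
	log.Printf("generated %s with %d virtual host(s)", rc.Name, len(rc.VirtualHosts))
}
```

A drain is the same mechanism taken to the extreme: set the weight of the affected region or version to zero and push the new configuration.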

One Network of Tomorrow

One Network of tomorrow: where we are today, bringing it all together. It's the same picture. We're basically done with the central part. Multi-cloud: we have connectivity, and we extended it to multi-cloud. We are working on the federation. We have the edge, obviously. We are working on mobile. It is a multi-year investment. The Envoy One Proxy project started in 2017. One Network started in 2020. Our senior executives committed to a long-term vision. The effort spans more than 12 teams. We have, so far, delivered 125 individual projects. The majority of Google Cloud network infrastructure supports One Network. Because it's all open-source based, open-source systems can be plugged into it.

Is One Network Right for You?

Is One Network right for you? The most important thing to consider is whether you have executive support to do something like this. I wouldn't recommend anybody do it on their own. The organizational goals also need to be considered. Is policy something that your company worries about a lot? Is compliance super important? Multi-cloud strategy, developer efficiency related to the infrastructure: those are important things to consider when embarking on such a huge project. Plan for a long-term vision, but execute on short-term wins.

That basically turned out to be a success story, because we didn't go for one big outcome. We were just doing one project at a time, improving networking one project at a time, closing holes in the Swiss cheese. We didn't talk much about generative AI here. That's why we decided to ask Gemini to write a poem for One Network and draw the image. Here it is. It's actually a pretty nice poem. I like it. Feel free to read it.

Questions and Answers

Participant: With all these sidecar containers, Envoy running on all these services, what kind of latency is introduced as a result of adding all these little hops?

Berenberg: They are under a microsecond when they're local. It's how the network operates. We didn't introduce anything new.

Participant: Today there are firewalls, load balancers, but you're also now adding an additional layer of proxy beside each service, which doesn't exist today.

Berenberg: No, we didn't. What we did, we normalized all the load balancers to be Envoy-based. We normalized all the service meshes to be Envoy-based plus gRPC-based. Whatever people were running, we continued to run. We just normalized what kind of network equipment we are running.

Participant: For organizations that don't already have or use a service mesh, introducing this means that where before service A communicated directly with service B, it was just service A to service B. Now it's service A, proxy, proxy, service B.

Berenberg: That's correct, when considering the Envoy proxy: as a sidecar, it introduces latency. That's why there is proxyless gRPC. It doesn't introduce any latency because it only has a side channel to the control plane. You don't have to context switch into anything. It's under 1 microsecond, I believe.

 
