


InfoQ Presentation: High Performance Serverless with Rust


Duration: 51:07

Summary


Benjamen Pyle explains how to achieve high-performance serverless applications with Rust and AWS Lambda. He discusses structuring multi-Lambda projects with Cargo, leveraging the AWS Lambda runtime and SDK for efficient development, and the importance of infrastructure as code for repeatability and automation.

Bio

Benjamen Pyle is a technology executive and software developer with over 20 years of experience across startups and large enterprises. He is Co-Founder and CEO of Pyle Cloud Technologies, an AWS-focused cloud consultancy specializing in cloud strategy, architecture, training, and cost optimization. He’s also an AWS Community Builder.

About the conference

Software is changing the world. QCon San Francisco empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Pyle: We're going to talk about high-performance serverless with Rust. Whether you're brand new to Rust, just getting started, or have been doing it for a while, you know that Rust is lauded for being highly performant. It's extremely safe. Rust is known for building applications that have fewer bugs, fewer defects. It's got a solid developer experience. Pairing it with serverless is not necessarily something that you would normally think of.

Rust is often thought of as a systems programming language, something that's networking, traffic routing, storage, those sorts of scenarios. What I want to talk to you about is how we can pair it with AWS's serverless, and specifically Lambda, and to bring together a really nice, performant, pay-as-you-go, no infrastructure to manage solution that's deeply integrated with other cloud services. I don't come from a systems engineering background. I've been doing Rust off and on for about a year, but I've got about 8 or 9 years with AWS Lambda, which just turned 10 years old recently. I was extremely curious about coming from a different background and how I could start working with Rust.

Background

Who am I? I've been doing this for quite a while, 25 years in technology. I remember the dotcom days. I've been everything from a developer all the way up through a CTO. I've had a bunch of different roles in my background. I'm a big believer in serverless first, hence the talk. Serverless is more than just compute, and we'll talk about that. I'm also a big fan of things that are compiled, which is also why we're talking about Rust. I'm an AWS Community Builder, a global network sponsored by AWS that focuses on evangelizing, talking, and writing about their technology. I'm co-founder of a company with my wife, where we serve and help customers with AWS and cloud technologies.

Serverless Ground Rules

Before we get started, I want to have some serverless ground rules. Forget what you've read on the internet. There's lots of different descriptions about what this might be. For the balance of this talk, when I mention serverless, these are the things that I want you to be thinking about. Serverless by nature, again, in my opinion, has nothing to provision and nothing to manage. What I mean by that is that I don't have to worry about standing up virtual machines. I don't have to worry about things being available for me at any point in time. I don't have to worry about, do I need to update, or patch, or manage an underlying environment? With serverless, things will scale predictably with usage in terms of cost.

For every event that I handle in serverless, every payload that I process, every CPU cycle that I burn, I'm going to be paying for that. On the flip side, any time I'm not doing those things, I'm also not paying for that usage. Contrast that with something that's always on: serverless is more "always available", and you also don't have to deal with planned downtime. If I've got an application that's written and hosted on AWS Lambda, I'm not going to tell my customers, on Saturday from 3 p.m. to 5 p.m., Amazon's going to update Lambda, therefore you can't use this during that time. That isn't the case with serverless. There is no planned downtime.

One of the real selling points that I like to mention is that it's ready to be used with a single API call. A lot of times when working with services in the cloud, you'll go hit the start button and wait 10 minutes only to find out your service is now ready. With Lambda, it's as simple as dropping some code in the environment and giving it an event; it's up and running. I'm not waiting. These are some things I want you to be thinking about as we dive through the rest of the talk.

Three Keys to Success

I'm a year into my Rust journey, but about eight years into using Lambda at all different scales, everything from hobby projects all the way up to hundreds of thousands of concurrent users running against the systems. First and foremost, here are three tips that I think are extremely important as you get started working with Rust and working with Lambda. At the tail end of this, we'll get into why I believe it's the high-performance choice. First up, we're going to look at creating a multi-Lambda project with Cargo, why we do that, and what that supports. We're going to look at using the AWS Lambda runtime and the SDK that's associated with it. Then we'll talk real briefly about repeatability and infrastructure as code.

1. Multi-Lambda Project with Cargo

First up, creating a multi-Lambda project with Cargo. I want to walk through the way that I like to think about organizing my application, and very specifically an API, as we're going to go throughout the rest of the talk. There's a couple of different thoughts here. If you have any experience with Lambda or have read anything about using Lambda for APIs, in a lot of cases, people will talk about these things called Lambdaliths versus a Lambda per verb. At its fundamental level, you can think of Lambda as a micro nano-compute service. You might have this microservice which is composed of multiple gets, multiple puts, post, deletes, an entire route tree that might support your REST API. My preferred approach with working with Rust is to build out one Lambda per each verb and each route that I'm going to support. If you're familiar with something like Axum or Warp, you may have a route tree that's exposed over there. We're going to talk about how to expose routes over individual Lambda functions.

Then, just to round out the rest of the project (though we're not going to dig too much into it), I also like to deal with change individually, per Lambda. We do this for a couple of reasons. First and foremost, at the keynote, Khawaja talked about blast radius and about de-risking as you're getting into a new project. By carving up my Lambda functions into these nanoservices, essentially, I'm de-risking my changes, I'm de-risking my runtime. For instance, let's say that I've got a get Lambda that is performing a query on a DynamoDB table, and I've got a defect in that get Lambda. My put, post, delete, patch, whatever, can be completely isolated from that get, so I can update just the get function without disturbing the rest of my ecosystem. I've firewalled off my changes to protect myself as I work through the project. However, that does tend to create a little bit of isolation, which can then cause some challenges when thinking about code reuse.

This is an example of a typical project that I want to walk through. The code will be available at the end; there's a link so you can clone it and work with it. It's in GitHub. The way I like to organize this project is in two halves. Up at the very top, you've got this infra directory, which is where my infrastructure as code is going to live; we'll talk through that at the tail end. Then I like to organize all of my functions under this lambdas directory. If you notice, I've got each of the individual operations, or routes, underneath those folders. I've got delete by ID. I've got a get-all. I've got a put. I've got a post. Each of those projects is essentially a Rust binary. When we get to the third section on infrastructure as code, we'll see how each of these gets compiled down to its own individual binary for execution inside Lambda.

Then at the very bottom, I've got a shared library, which we'll talk about. It's just a shared directory. We're going to look at how we can organize our code to be able to be reused across those projects. One thing I want to demystify a little bit is that Lambda and working with Rust isn't really any different from working with other Rust projects. This is a multi-project setup. I can take advantage of Cargo and workspaces so that I'm able to organize and reference my code across these different Lambdas.
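As a hedged sketch of that workspace setup — member names and dependency versions here are illustrative, not taken from the actual repository — the root Cargo.toml might look like:

```toml
# Root Cargo.toml: one workspace, one member per Lambda binary plus the shared lib.
[workspace]
resolver = "2"
members = [
    "lambdas/delete-by-id",
    "lambdas/get-all",
    "lambdas/get-by-id",
    "lambdas/post",
    "lambdas/put",
    "shared",
]

[workspace.dependencies]
# Pin shared dependency versions once; each member inherits them.
tokio = { version = "1", features = ["macros"] }
lambda_runtime = "0.13"
serde = { version = "1", features = ["derive"] }
```

Each Lambda's own Cargo.toml can then reference the shared crate with a path dependency such as `shared = { path = "../../shared" }`.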

As you can see, I've got the same five Lambdas: post, get, delete, put, and get-all. They're all able to access that shared project. Why do I want to do this? Thinking about how to access code, if you're familiar at all with Lambda, there's a concept of layers. Lambda layers are super useful for TypeScript. They're useful for Python. They're useful for languages that can bring in references on demand. We don't have that dependency with Rust, so things are going to get compiled in. What I like to do is organize all of the different things that I'm going to need for my application that might be reused across those different Lambda operations. We'll talk a little deeper about AWS client building. I also like to organize my entity models there. If I've got domain objects or operations that are core to my business, I might put them in my shared library. If I've got data transfer objects for my API that might be reusable across any of those functions or operations, I put them there.

Then if I've got any common error structs, that I may want to treat errors and failures the same across my boundaries, I can put that in that location. Again, just to show that there really isn't anything special about pairing these two together, building a shared library that works with Lambda is the same that you would with any other Rust project. It's just marked as a lib or a library. We include our required dependencies. We've got our shared package details that are required for that project.
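As a minimal, hypothetical sketch (the type names here are illustrative, not from the talk's repository), such a shared crate is plain Rust: an entity model plus one error type that every Lambda reuses so failures look the same across boundaries:

```rust
use std::error::Error;
use std::fmt;

// Hypothetical entity model shared by every Lambda in the workspace.
#[derive(Debug, Clone, PartialEq)]
pub struct Todo {
    pub id: String,
    pub title: String,
    pub done: bool,
}

// One error type so failures are treated the same across all operations.
#[derive(Debug)]
pub enum ApiError {
    NotFound(String),
    BadRequest(String),
}

impl fmt::Display for ApiError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ApiError::NotFound(id) => write!(f, "item {id} not found"),
            ApiError::BadRequest(msg) => write!(f, "bad request: {msg}"),
        }
    }
}

impl Error for ApiError {}
```

Because the crate is marked as a library, each Lambda binary can map `ApiError` variants to the HTTP status codes it returns.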

2. Using the Lambda Runtime and SDK

Next up, I want to drill into a little bit about using the Lambda runtime and SDK. Working with Lambda is a little bit different than working with maybe a traditional container-based application. I want to make something clear right now. The AWS Lambda runtime for Rust is different from what you might be familiar with from the runtime if you go to the AWS platform. If you were to go to AWS Lambda right now and want to go build a new Lambda function, Amazon is going to ask you, which runtime would you like to use? They're going to list TypeScript. They're going to list Ruby. They're going to list Python. They're going to list .NET. It's going to have Java. You're not going to see a runtime for Rust. The reason is that the runtime that AWS is talking about is the runtime that your language requires in order to execute your code. As you all know, Rust is not dependent upon a runtime.

Therefore, you're going to select one of the Amazon Linux containers or the Amazon Linux runtime. However, there is this notion of a Rust runtime, which we're going to talk about. It's extremely important because Lambda at its core is nothing more than a service. Its job is to handle events that are passed into it, whether that's a queue, a stream, a web event. It's going to communicate with your code over this runtime API. It also exposes an extensions and telemetry API, which is not super important for this discussion.

The runtime API is very important because it handles whatever event comes in and forwards it on to your function. Your function then deals with that event: it processes, does its logic, and returns back out to the Lambda service so that the Lambda service can do whatever it was going to do upstream. The Rust Lambda runtime matters here because it is an open-source project whose job is to communicate back and forth with that Lambda runtime API, serializing and deserializing payloads for you. It basically sets up an event loop to deal with the events coming in on your behalf.

Then you're going to be able to just focus on writing your function code, which we'll look at. You might be wondering, what kind of overhead does something like this produce? Not nearly enough that you should go out and try to build this on your own. I'm not going to go out and build this on my own. The amount of overhead that's involved here is nominal, and I'll show a little more about that in the performance section. It simplifies your life significantly, because like I mentioned, it's able to communicate back and forth with the Lambda API. It does contain multiple projects inside of it. It's got provisions for dealing with an HTTP superset, which is what I'll show in the examples. It is capable of working with the extensions API.

Very importantly, it's got an events package inside of it. Anytime you work with a different service in AWS, the payloads are going to be different as they come in. An SQS payload is going to be different from an API Gateway request payload, which is going to be different from an S3 payload, and on and on. This library has provisions for us to work with those out of the box, which, again, you could do yourself, but why bother if it's already done for you?

To drive in just a little bit on what a Lambda function looks like, I want to really show you how simple some of this is, because the thing I love about writing Lambdas is that if I make a mistake, I can unroll it pretty quickly. A lot of times I'm writing 100, 150, 200 lines of code to provide an operation. My brain can reason about 200 lines of code. Sometimes you open a large project and there's 15 routes or 20 routes or whatever it is. That's a lot to think about and get your head around conceptually. Lambda, to me, feels like a nice bite-sized chunk of code to work with. The function is an asynchronous main, which is enabled by Tokio.

As my main runs, it's a regular Rust binary, like I mentioned before. I'm going to do things in main to initialize what I want to reuse down the line. I'll get into this in a little bit when talking about performance, but what's interesting is, right there where the cursor is, initializing shared clients and reusing them on subsequent operations saves me from having to stand up dependencies at execution time. Because main will only get run once, I only get dinged for that setup once.

Then every time my function handles an event, I won't have to do this again. If I'm saving myself 100 milliseconds, I do it up front, I never have to pay that cost again. Because when we look at the anatomy of a simple handler function, this is my get operation right here, and what it does is it's going to be querying DynamoDB, it's going to be handling a specific response. Then, it's going to be sending that response back out to the Lambda service, which is going to go on to wherever it was supposed to go. This code right here gets executed every time. It's going to happen on the first call, and it's going to happen on the millionth call. My main is only going to get executed just that one time. I want to do my best to make sure that I set up what I want to, so that I can benefit from reuse inside of my subsequent requests.

The second half of tip two is the AWS SDK for Rust. Everyone, I'm sure, knows what an SDK is. I'm not going to go through the nitty-gritty of what that is. I do want to highlight why I believe, if you're starting a journey into writing Lambdas and serverless compute with Rust, the SDK is important. There are some nuggets in here that, especially if I'm using something like DynamoDB, mean I don't have to worry so much about data access, because it's taken care of by the SDK, which we'll look at. With Lambda and Rust, I've got a really rich ecosystem to work with. Being in AWS means that I can get a whole lot of other capabilities simply by connecting to another service.

Some of these that are listed up here are serverless, Amazon SQS, AWS Fargate, there's Lambda again, S3, and then others may not be as serverless. The AWS SDK for Rust is going to give me access to be able to connect and communicate with any of the different AWS services that I want to. In order to do that, I want to think about the fact that the SDK wraps, like I mentioned, each one of those individual services. Why is this important? Again, I go back to my blast radius. If I have a get operation that is a Lambda function that only works with DynamoDB, I'm only going to pull in the DynamoDB client. If my post operation needs to write to DynamoDB and then also send a message on EventBridge to put a message on their bus, I'll bring that in.

As I build these functions, I can specifically carve out what I need to support each function. In addition to that, if you're thinking about security, the AWS SDK for Rust deals with identity and access management for you. If you're familiar with AWS, you know that identity and access management is very important. Every service has its own set of permissions. If I want to execute a get with Dynamo, I either need to grant `*` (everything), which is a really bad idea, or I just grant GetItem.

Or if I want to work with an index, I might want to give it permission to access that index. By being really fine-grained again with my Lambda functions, being very specific about the things I'm pulling from the SDK, I'm going to get this nice, tight, really well-secured block of code that's going to do just what I want it to. Then, lastly, since it is a web service, Amazon Web Services, you'll have these nicely defined structs for dealing with request and response.
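As an illustration of that least-privilege idea, the IAM policy attached to the get function's role might allow only the single DynamoDB action it uses. The table name, account ID, and region below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:GetItem",
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Todos"
    }
  ]
}
```

Because each Lambda is its own binary with its own role, the put function would get a separate policy with only `dynamodb:PutItem`, and so on.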

We're not going to spend tons of time talking about SDKs. A DynamoDB client, and every one of the clients that I've worked with, from SQS to S3 to DynamoDB, all have a similar shape and structure. You're going to specify the region or the provider, and if you're using this in your local environment, it's going to pick up local credentials. If you're using it out in the cloud, it's going to have a service account for that Lambda to operate with. What's really nice is that if you are doing any local development, which I like, even though I love cloud development, you can specify local endpoints, like I'm doing right there, to access local services. This client becomes your gateway into all of the API operations that you're going to use throughout the life cycle of your Lambda function. What might those operations be?
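As a hedged sketch of that client construction — assuming the aws_config and aws_sdk_dynamodb crates, and a local DynamoDB (such as DynamoDB Local) listening on port 8000 — it might look like:

```rust
use aws_config::BehaviorVersion;
use aws_sdk_dynamodb::Client;

const LOCAL_ENDPOINT: &str = "http://localhost:8000";

// Build the client once (in main) and hand it to the handler for reuse.
async fn build_client(local: bool) -> Client {
    let loader = aws_config::defaults(BehaviorVersion::latest());
    // In the cloud, region and the function's service role come from the
    // environment; locally, point the client at a local endpoint instead.
    let loader = if local {
        loader.endpoint_url(LOCAL_ENDPOINT)
    } else {
        loader
    };
    Client::new(&loader.load().await)
}
```

The `local` flag here is an illustrative convenience; in practice you might key this off an environment variable.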

Like I mentioned, for Dynamo, it's going to be get. You're going to have puts, you'll have deletes, and they'll usually support batching of operations should you want it. What I really like is that I can go read the AWS API specification, and the Rust SDK for AWS looks very much like that specification. Just to wrap this section: make use of the tools available. I would not recommend, especially if you're new to Rust and new to AWS, trying to write your own Lambda runtime for Rust. The maintainer worked for AWS — I'm not sure if he's still there — and they spent a lot of time making that work.

Then the AWS SDK for Rust is going to give you consistency, durability, and acceleration as you're building. Again, there are lots of customers in production already using this. Why try to reinvent the wheel? The fact that you can feature-flag and only toggle on what you want in your Cargo file lets you really fine-tune what you're pulling into your builds.

3. Repeatability in IaC

The last piece of these three tips: I want to talk to you about repeatability with IaC, infrastructure as code. If you're new to the cloud, you may not have heard of infrastructure as code, but this is why I think it's super useful, and I want to talk about how to use it with AWS and Rust. Infrastructure as code is a way to define the infrastructure, or the services that you're going to use in AWS, in a programming language of your choice, or a markup language if you want.

Being able to do that gives me repeatability. I don't have to remember the seven steps I clicked to make it work when I go to QA or production. I've got code that I can execute and run over and over. It may seem like a pain to get started, but you're going to gain speed as things grow, as you pull more services in. I'm also going to be able to share this with my developers.

The last thing that I, as a developer, want is to be blocked waiting on infrastructure. I want to be able to move at my own pace. I'm not saying you shouldn't work with your infrastructure team; I'm saying that being able to write my own infrastructure declarations in code helps me move faster. It also builds more buy-in as you get up to the cloud and start running load. Then, it's a great foundation for partnership with infrastructure on automation. As you start to build continuous deployment, your IaC becomes extremely critical.

There are a lot of choices in the serverless landscape. My kid's favorite is SAM the Squirrel, for obvious reasons. That's the Serverless Application Model; it's an AWS product, it supports YAML, and it's fantastic. That one has the best logo. Terraform is another one, a popular provider. On the bottom left is the Serverless Framework; if you've spent time in the serverless space, they've been around for quite some time. There's Pulumi down there on the bottom right. There are others; these are just the four competitors that I often see. I'm a proponent of the Cloud Development Kit by AWS, the CDK. I'm not a huge fan of writing a ton of YAML for this stuff. With CDK we can build with TypeScript, or with other languages should we want to. You can't yet build with Rust, which is a drawback. So how do I embed a Cargo build system, or Cargo patterns, with CDK?

First off, there's another really great open-source project called Cargo Lambda. Cargo Lambda is a subcommand for Cargo which helps you do builds and cross-compiles; I believe the cross-compilation is powered by Zig. It does release optimization, stripping of symbols, minification where it can, things of that nature. It has local development support, which we'll talk about here, and which I think is fantastic, because you don't always want to have to push your code to the cloud to be able to test. It will leverage Docker or your local build environment, which is great. If I'm trying to target ARM and I'm only on x86, I can get my local builds happening that way. Or if I'm going out to a build system in the cloud that doesn't support it, it'll be able to do that as well.

Then we'll easily be able to embed it to support automation in our build pipeline. How do I embed it with my chosen CDK? Again, there is an open-source project for this that is an extension of the Cloud Development Kit. Real briefly, the Cloud Development Kit supports three levels of abstraction. Level 1 is basically raw, bare-metal CloudFormation. Level 2, which is about where this sits, is essentially one service wrapped in an abstraction for you.

Then level 3 is compounding multiple services together in one package. If I wanted to put together a Lambda plus a DynamoDB table plus an API Gateway plus an SQS queue, that would be a level 3. This Cargo Lambda library supports my RustFunction. This is TypeScript. I give it a FunctionName. I give it a manifestPath, which is simply the directory that points to the TOML file for that binary. Again, if I had five functions, I'm going to have five of these. Memory size with Lambda, as I'll show you in a little bit, makes an impact on cost, compute, and performance. Architecture can be ARM, which is Amazon's Graviton, or I could have done x86. Then there are any environment variables that my function might need to support its operation.
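In TypeScript CDK code, and assuming the cargo-lambda-cdk construct library, that definition might be sketched like this (function name, paths, and environment values are illustrative):

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import { Architecture } from 'aws-cdk-lib/aws-lambda';
import { RustFunction } from 'cargo-lambda-cdk';
import { Construct } from 'constructs';

export class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // One RustFunction per Lambda binary in the Cargo workspace.
    new RustFunction(this, 'GetAllFunction', {
      functionName: 'todos-get-all',
      // Path to the Cargo.toml of the binary cargo lambda should build.
      manifestPath: 'lambdas/get-all/Cargo.toml',
      memorySize: 256,
      architecture: Architecture.ARM_64,
      environment: { TABLE_NAME: 'Todos' },
    });
  }
}
```

A five-function project would repeat this construct five times, one per binary, each with its own memory, architecture, and permissions.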

I mentioned local testing. Cargo Lambda supports local testing by allowing you to pass elements that look like the events that your function is going to work with. The very first one is invocation of my Lambda. If I only have one Lambda in my project, I can invoke my Lambda with that payload for foo.bar, if that's what my event looks like. If I've got a multi-Lambda project like we've been talking about, I can specify an invoke, and then give it the name of the Lambda that is going to go back actually to the project binary name that I gave it. Foo.bar is great, but what if I have like this really complex payload? I can also then pass in a data file. As a Lambda developer who's targeting Lambda for compute, you're going to end up with collections of different payloads. Because I want to test a good payload, a bad payload, all these different combinations.

Again, this can be part of an integration test, it can be part of your build pipeline, it could also just be part of your local development as you get your arms around what's going on. Lastly, maybe you don't know what your payload looks like, which is totally fine. The Cargo Lambda project has got a set of API example payloads, so all the different event structures that I mentioned inside of the Lambda runtime project are also going to be available. The team that supports this provides all those payloads for you to be able to test with.
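Concretely, those local-testing flows map to cargo lambda subcommands along these lines (function and file names here are illustrative):

```shell
# Serve the functions locally, emulating the Lambda runtime API
cargo lambda watch

# Single-Lambda project: invoke with an inline payload
cargo lambda invoke --data-ascii '{"foo": "bar"}'

# Multi-Lambda project: name the binary to invoke
cargo lambda invoke get-all --data-ascii '{"foo": "bar"}'

# Complex payloads live in files you can check in and reuse
cargo lambda invoke get-all --data-file ./payloads/good-request.json

# Or start from a generated example event for a given service
cargo lambda invoke get-all --data-example apigw-request
```

These commands assume cargo-lambda is installed and, for invoke, that `cargo lambda watch` is running in another terminal.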

I've mentioned Cargo Lambda. I've mentioned CDK. We get a build; what does that look like? CDK, because of that RustFunction that I defined, is going to drop down and run the Cargo Lambda build. Then it's going to generate a couple of things for me: CDK does one part and Cargo Lambda does the other. The first thing is the bottom example, which is essentially the CloudFormation JSON that gets executed. All infrastructure changes in AWS happen through CloudFormation, for all intents and purposes. In this case, the automation is going to happen through CloudFormation. CDK is going to generate all of the different settings that I put together in my Lambda function. If I specified 256 megs of memory, and I want to be on ARM, and I want these environment variables, and I want these permissions and these policies, all of that gets emitted for me in this file that will get executed.

The second part, which is the Cargo Lambda part: in our case we have five Lambda functions, so it's going to generate five different packages, which are ZIP files, stored out in that directory. If you noticed in there, you're going to see an entry, or a handler, called bootstrap. That's the name of the executable that gets generated inside the ZIP file. It is changeable, but there's no real reason to change it because they're all isolated. As you can see, by pairing Cargo Lambda with my Lambda project, I get this nice, clean isolation. I'm only going to ship what's changed; Lambda is not going to update things that I haven't mutated. I also don't have to deal with generating all this CloudFormation as I go up to the environment.

Automation and repeatability are key, like I mentioned. I really believe that by investing in IaC upfront, you're going to go faster as you get bigger. Maybe it's not a big deal for one function when you're just testing, but as you start to build real projects, you get 10, 12, 14 Lambdas in a project. This automation is going to pay dividends, because you're going to have other resources involved: your Lambda is going to need DynamoDB, it's going to need S3, it's going to need SQS, all of these other services that it's going to want to connect to. By using this, Cargo Lambda gives me local deployment, it gives me that cross-compilation, which is fantastic, and I get release-ready output.

Recap - 3 Tips for Getting Started

Just to recap what we've talked about so far, three keys to success. First, create a multi-Lambda project with Cargo. I'm prescriptive about it, but at the same time I really struggled with this as I got going with Lambda and with Rust. How do I organize it? Do I put everything in one big monolith? Do I break them up? From a year of working with it, and from experience with Lambda, I prefer the isolation of one route, one verb per Lambda operation. You will spread out quite a bit. Again, I go back to the fact that so many things won't change very often. A lot of times you have code you're not even going to visit alongside things under active development, and I really prefer that isolation. Second, use the AWS Lambda runtime and SDK.

Again, you don't want to deal with Serde and deserializing the different payloads in and out of your functions yourself. Take advantage of that project that's out there. I know from experience, and have seen customers run really nicely sized loads with Lambda and with Rust, using this project as an intermediary. Take advantage of the SDK; don't try to write your own wrappers around AWS services. Third, the piece that most people skip over: infrastructure as code. Cargo Lambda makes it so simple to get started early that there's almost no reason not to, because it's going to pay you so many dividends.
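To make that concrete, here's a minimal sketch of the one-route, one-verb shape using the community `lambda_http` crate from the AWS Lambda Rust runtime project. The handler name and response body are illustrative, not the exact code from the talk's repo:

```rust
// One route, one verb: this entire binary handles only GET /items.
// lambda_http (from the awslabs/aws-lambda-rust-runtime project)
// deserializes the API Gateway payload for you, so there is no
// hand-rolled Serde in sight.
use lambda_http::{run, service_fn, Body, Error, Request, Response};

async fn get_items(_event: Request) -> Result<Response<Body>, Error> {
    // Illustrative response; real handlers would call the AWS SDK here.
    let resp = Response::builder()
        .status(200)
        .header("content-type", "application/json")
        .body(Body::from(r#"{"items":[]}"#))?;
    Ok(resp)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // run() wires the handler into the Lambda event loop.
    run(service_fn(get_items)).await
}
```

The compiled binary is what Cargo Lambda packages as `bootstrap` inside the ZIP.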

Rust is the Lambda High-Performance Choice

I've talked for about 30 minutes about project orientation and infrastructure as code, but the talk was about high-performance serverless with Rust. I want to leave you with some things here at the end. Rust, to me, is the only high-performance Lambda choice. It's a pretty bold statement. I've run Go. I've run Java. I've run .NET. I've run TypeScript. I've not run Ruby or Python, but anecdotally I don't believe they're any faster, and we'll have some data here. If you're looking to squeeze the most out of your compute, your usage, your price, Rust is the way to go. Why is that? How do you measure performance? We'll talk about two things that are controversial in the serverless world: cold starts and warm starts.

If you read about cold starts on the internet, they happen 80% of the time, they're the worst thing ever, Lambda can't be used because it's got this cold start problem. Per AWS's research, cold starts happen less than 5% of the time, and most of the common runtimes are extremely adept at dealing with them. I'm going to show you how Rust skips over them. The way Lambda works is, as you check in your ZIP, your binary, your package goes up into the Lambda repository. I'm going to simplify this quite a bit and not get into too many of the nuts and bolts. Your code is sitting there waiting for execution. It's just hanging out, and you're not getting charged. Remember, with serverless you only pay for what you use; you're not paying for this ZIP file to sit in Amazon storage.

On the very first execution, the Lambda runtime is going to grab your code, pull it down, put it in the runtime, run your main function, and do whatever initialization it has to do. The internet, and AWS too, calls it a cold start: starting from nothing, essentially. It's like starting your car on a cold day. I'm from Texas; we don't get many cold days, but I hear that when you start your car on a cold day, it takes a little while to warm up. On subsequent executions, everything happens the same except for all the initialization. Your function's warm. It's sitting in the environment.

All it's going to do is run that handler code, and you get what's called a warm start. Up until a couple of years ago, there was a significant gap in duration here; it could be multiple seconds, up to 10 seconds in some cases. Most of the modern languages are now down to a second and a half, less than 2 seconds. Think of an API, though: if it's 5% of requests, then out of 100 requests, 5 of my users might wait a second for that first container to come up. The way Lambda works, too, is that I don't just get one instance; I might get hundreds of instances of my function running, and the first time each of those instances runs, I get a cold start. We don't really know how long warm containers last. Around 15 minutes of inactivity is the rule of thumb, but that's not really documented anywhere.
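The cold/warm split is why Lambda code conventionally does its expensive setup once, outside the handler, and keeps the per-invocation path lean. Here's a std-only sketch of that init-once pattern; the names are invented, and the string stands in for something genuinely expensive like an AWS SDK client:

```rust
// Cold start vs warm start, simulated with only the standard library.
use std::sync::OnceLock;

// Stands in for an expensive resource (config load, SDK client).
static CLIENT: OnceLock<String> = OnceLock::new();

fn handler(request: &str) -> String {
    // get_or_init runs the closure only on the first (cold) call;
    // warm calls skip straight past it to the handler logic.
    let client = CLIENT.get_or_init(|| "initialized-client".to_string());
    format!("{client}: handled {request}")
}

fn main() {
    println!("{}", handler("req-1")); // pays the init cost (cold)
    println!("{}", handler("req-2")); // reuses the client (warm)
}
```

In a real Rust Lambda, the same effect comes from building clients in `main` before handing control to the runtime's event loop.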

Let me show you why Rust is the way to go here. This is an open-source project, written by an AWS engineer who used to be a Datadog engineer. Essentially, every morning it runs a Step Function, which is another really cool serverless service from AWS. It runs a Hello World, then checks the cold start duration and warm start duration, the little snowflake and the lightning bolt, and the memory used. Left to right, top to bottom, the most performant language from a cold start standpoint is C++11 on provided Amazon Linux 2, at 10.6 milliseconds.

Rust on AL2023 is right there at 11.27 milliseconds, actually faster than C++11 on Amazon Linux 2023. It's 11.27 milliseconds, 13 megs of memory, and 1.61 milliseconds on a warm start. Let's contrast that with Node, way down here in the bottom right at 141 milliseconds. This is a Hello World, just the simplest of the simple, and it's almost 14 times slower on cold start, four or five times on memory, and 12 or 13 times on warm start. It's a significant difference in performance, on a simple Hello World. I've got some graphics here from an example I ran that's based off that repository, running 100 concurrent users.

Again, the load profiles, as you go up, are pretty much the same. What's interesting is that this is that DynamoDB operation. The top graph is cold starts. That's executing: loading the environment, checking the IAM credentials, executing a DynamoDB query, then coming back with a result. On a cold start, that's somewhere between 100 milliseconds and 150 milliseconds. If on a cold start I've got a 13- or 14-times performance difference, think about how that extrapolates out to other languages. Again, it's 5% of my users, but there's a difference between 5% of my users getting 100 or 150 milliseconds and them getting a full second. If it's an asynchronous operation, nobody probably pays attention.

If I'm waiting on something to happen in a UI, I don't know about you, but if my bank takes more than a second, I'm pretty hacked off. I'm like, what in the world? I'm hitting refresh. Then, going down, everything gets a little bit more insane as you go lower. The average latency is sitting at around 15 milliseconds, and min latency is less than 10 milliseconds. Again: less than 10 milliseconds to execute a DynamoDB query, serialize, deserialize, and come back all the way out. It's pretty crazy.
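For context, a handler like the one being measured might look roughly like this. The crates (`aws-config`, `aws-sdk-dynamodb`) are the official AWS SDK for Rust, but the table name and key are hypothetical:

```rust
// Sketch of a DynamoDB-backed handler like the one in the benchmark.
// Table and attribute names are invented for illustration.
use aws_sdk_dynamodb::{types::AttributeValue, Client};

async fn query_items(client: &Client, pk: &str) -> Result<usize, aws_sdk_dynamodb::Error> {
    let output = client
        .query()
        .table_name("items") // hypothetical table
        .key_condition_expression("#pk = :pk")
        .expression_attribute_names("#pk", "pk")
        .expression_attribute_values(":pk", AttributeValue::S(pk.to_owned()))
        .send()
        .await?;
    Ok(output.count() as usize)
}

#[tokio::main]
async fn main() -> Result<(), aws_sdk_dynamodb::Error> {
    // Client construction happens once, at cold start; warm invocations
    // reuse it, which is part of why warm latency can stay near 10-15 ms.
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);
    let n = query_items(&client, "user#123").await?;
    println!("matched {n} items");
    Ok(())
}
```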

If you've been around the Rust ecosystem for a while, you've probably seen this graph. I know it was big at AWS re:Invent last year. It's a graph of different languages and their energy and usage consumption, and how they stack up. Again, Rust is always up at the very top. There it is in the middle graph, with a total sum factor just a little bit greater than C. There's Go at almost three times. Then we get all the way down to Ruby and TypeScript at, again, 46 times from an efficiency and total computation standpoint.

Rust-built Lambda functions are just going to be naturally faster, on an order of magnitude that's significant in some cases for your user base. If user performance wasn't enough: going back to my opening statement on the serverless categorization, that cost and usage are very closely linked, the argument's been made that cost, usage, and sustainability are also extremely closely linked.

I'll leave you with this last slide to illustrate exactly why Rust is the choice if you're looking for high performance. Lambda pricing basically works like this. There's a calculator out there that will show you, but I'm going to simplify the math: it's basically your total compute duration times the memory allocated. I mentioned earlier that you can specify memory on your Lambda function. Memory goes from 128 megabytes all the way up to 10,240, and you can go up in 1-meg increments. CPU allocation correlates and tracks with memory, even though you can't set it directly. The duration that's happening, for the memory that I've allocated, is going to equal my total cost. It's a pretty simple formula, but you can think of it as: how long it ran, times how much resource I allocated, tracks back to cost. Broken down at a really granular level: 1 millisecond at 128 megs costs that top number; at 512 megs it costs that other number.
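That simplified formula is easy to sketch in code. The per-GB-second rate below is an assumption for illustration only; check the AWS Lambda pricing page for current regional numbers, and note that real bills also include a per-request charge:

```rust
// Simplified Lambda compute cost: duration x allocated memory.
// ASSUMED rate for illustration; real prices vary by region/arch.
const PRICE_PER_GB_SECOND: f64 = 0.0000166667;

/// Monthly compute cost for steady traffic of `requests_per_sec`,
/// with `duration_ms` average duration at `memory_mb` allocated.
fn monthly_compute_cost(requests_per_sec: f64, duration_ms: f64, memory_mb: f64) -> f64 {
    let seconds_per_month = 60.0 * 60.0 * 24.0 * 30.0; // ~2.59M
    let invocations = requests_per_sec * seconds_per_month;
    let gb_seconds = invocations * (duration_ms / 1000.0) * (memory_mb / 1024.0);
    gb_seconds * PRICE_PER_GB_SECOND
}

fn main() {
    // A fast Rust handler: 1 req/s, ~2 ms warm, 128 MB -> roughly a penny.
    println!("{:.4}", monthly_compute_cost(1.0, 2.0, 128.0));
    // A slower runtime: 1 req/s, ~50 ms warm, 512 MB -> roughly a dollar.
    println!("{:.4}", monthly_compute_cost(1.0, 50.0, 512.0));
}
```

The hypothetical durations above are chosen only to show the shape of the math: cost scales linearly in both duration and memory, which is why a fast, low-memory runtime compounds savings.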

The reason that 512 number matters is, first of all, that Rust is not going to need 512 megs of RAM to run. I've found its sweet spot to be between 128 and 256. This is a really fun calculator for looking into that. Other runtimes, though, and I mean runtime as in the language runtime, like TypeScript, .NET, and Java, are going to require more memory. I'm going to contrast TypeScript, which runs better with more memory than Rust does. At a simple one request a second, constant throughout the month, at 128 megs of RAM, TypeScript is going to cost me 60 cents. Rust is going to cost me a penny, for a difference of 59 cents.

That's one request per second, just consistent traffic coming in from the internet or from an API call. At 512 megs, that number gets significant, if $2 versus 3 cents is significant. With a steady stream of 100 requests per second for an entire month, my TypeScript function is going to cost me $61 at 128 megs, versus $245 at 512. My Rust function still costs me 3 bucks at max, and I'm not going to run it at max; I'm going to run it probably at 128. So I'm really almost comparing $245 against 86 cents. That's 100 requests a second, and I've got some Lambda functions whose peaks may run 5,000 or 10,000 requests a second. Just looking at 1,000, TypeScript at 512 is going to cost me $2,400 a month. If I've got functions carved up so that I have GETs, PUTs, POSTs, and DELETEs, I could have a $15,000 bill to run TypeScript if I had any traffic.

With Rust, I could have 50 bucks. The argument's always made with Lambda: Lambda costs more, Lambda is expensive, Lambda is this, Lambda is that. The other thing about Lambda, though, is that I don't have to pay for EC2. I'm not paying for people to manage containers and networking. I get a lot of simplification. If nothing else, my argument for Rust is that, if I'm building APIs, I can stave off the need for a container except in some really high-demand scenarios. I can do that because I'm not paying that expensive cost of running high memory loads, for durations that are 10 to 14 times higher, the way some other languages do.

I hope I've been able to show you that pairing Rust with Lambda may be an unusual use case, but at the same time I know from experience with customers that this is out in production. I've used it in production. I've used it in healthcare. I've seen it in some other places. If you pair these two together, you get the beauty of all the things that are Rust, as well as all of the beauty and fun that is Lambda and serverless computing. Just for reference, here are the sources from the presentation. That's my blog at the top, which has everything on it. There's a project called serverless-rust.com, which has a bunch of examples and patterns for getting started with serverless and Rust, focused around AWS. That's another really cool project. And there's the repo with the entire API that we just walked through, some of the AWS pieces, and Cargo Lambda.

Questions and Answers

Participant 1: You focused on AWS quite a bit. Have you had experience with other cloud providers, in serverless?

Pyle: I have not used Rust with other cloud providers. In serverless, yes, I've done a little work with Azure and a little bit of work with Google, but most of my focus has been in AWS.

Participant 1: What was your Azure experience?

Pyle: My Azure experience was several years ago, working with just Functions and App Service. I've just started playing around a little bit with containers, or Container Apps, because of their interesting nature.

Participant 1: The second bit was the Cargo Lambda command. You positioned it as a testing tool, but it seems there are some ZIP files. What does it actually do?

Pyle: It'll actually broker the compilation for you. It'll sub out to Zig and it'll build that Rust binary for you based on the architecture that you've specified. Then it allows you to run and test your stuff locally. Then pairing it with CDK gives me just this abstraction that I can specify, here's my Rust project. Cargo Lambda goes out and deals with building and packaging and getting it ready to go for the Lambda runtime.

Participant 1: It's the local test angle. It seems to diverge from the standard Rust testing tools. Why is that?

Pyle: It's because it's focused really around Lambda. Testing Lambdas from an integration standpoint is really focused on event testing versus component-level testing. When you put your Lambda out, it's responding to these different JSON payloads. Their goal was to give you a nice harness to test against that, because the only other way to do it is to bolt into the Serverless Application Model's sandbox, and that gets a little bit clunky.

Participant 1: It instruments as if you were in the cloud environment with the events.

Pyle: It's just local. It's just running the project locally and then executing the payload off through the handler that I showed you.

Participant 2: You make writing Rust look way too easy, but at least when I write code, I make a lot of bugs. I was curious about debugging, especially with Lambda: you're going from one Lambda to another, to another, and debugging can be even more cumbersome. I'm curious if you have any tips on complex debugging and profiling when you're running Rust on Lambda.

Pyle: A couple of different layers. There are some steps with Cargo Lambda for attaching a debugger, so that I can run Cargo Lambda locally, simulate that payload, and then attach a debugger inside of my environment. I've tested that with VS Code and Vim, and it works really well; there are instructions on how to do it. The second way is the old tried-and-true events and prints. The third way is that once I'm up in the cloud, it's probably going to be AWS tooling paired with something like Datadog, so that I can get a little more observability into my application, especially as I start to get some traffic against it.

Participant 3: [inaudible 00:48:52]

Pyle: I believe you could do it with API Gateway; that would perhaps be the way to go. You might be able to connect Rust up to API Gateway and handle that event. It wouldn't be directly against Lambda; it would go through the gateway, and the gateway would proxy into Lambda.

Participant 4: As a person who's been working with Rust for a year now, as you have, how hard would you say it would be to migrate a Java codebase to a Rust codebase?

Pyle: I haven't thought about that. It depends on how big the Java codebase is, whether you're using Lombok, and what other interesting dependencies are in there.

Participant 4: The home screen looked perfect.

Pyle: If you stay out of some of the more challenging parts of the Rust language, I would think it would be pretty straightforward. What I find critical is that if I'm a lone Rust person, versus having two or three people doing it with me, we'll move quicker together. If you've got good Java programmers that have a specific need to pivot, and they're saying, I want to pivot to Rust, and you went together, I think you'd have some success.

Participant 4: I think that your screen with cost is quite amazing.

Pyle: Yes. Especially if you're running Lambda with Java, and especially if you were trying to pivot from a container-based world to a function-based world, Rust is going to stack up really nicely there.

Participant 4: It does.

 
