Graviton with Nitro 4 has been quite pleasant to use: with the Rust aarch64 musl static target and rust-lld I can build monolithic ELFs that work not just on my Android phone via `adb push` and `adb shell` but also on AWS.
AWS with Nitro v3+, iirc, supports TPM, meaning I can attest my VM state via an Amazon CA. I know ARM has been working a lot with Rust, and it shows - binfmt with qemu-user means I often forget which architecture I'm building/running/testing on, as the binaries seem to work the same everywhere.
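For anyone wanting to reproduce this, a minimal Cargo setup might look like the following. This is a sketch, not the commenter's actual config: the target triple is Rust's standard musl one, and whether you need the explicit rust-lld/static settings depends on your toolchain version (recent musl targets are static by default).

```toml
# .cargo/config.toml -- hypothetical example setup
[target.aarch64-unknown-linux-musl]
linker = "rust-lld"                               # link with LLVM's lld instead of a cross-gcc
rustflags = ["-C", "target-feature=+crt-static"]  # force a fully static musl binary
```

Then `rustup target add aarch64-unknown-linux-musl` followed by `cargo build --release --target aarch64-unknown-linux-musl` yields a single static ELF you can `adb push` to a phone or copy to a Graviton instance.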
Wouldn't the business impact always be performance per dollar from the client's perspective? This reads like a document that's meant to convince AWS management to invest in the new chip, focusing on how it's maximally flexible for sale, not a document to convince customers to use it ...
> This reads like a document that's meant to convince AWS management to invest in the new chip, focusing on how it's maximally flexible for sale, not a document to convince customers to use it ...
AWS management is the customer.
Higher compute density, lower infrastructure costs, and higher performance. Those are data center selling points.
The truth of the matter is that your average external customer doesn't really care about CPU architectures if all they are doing is using serverless offerings, especially AWS Lambdas handling events. They care about what it costs them to run their services. AWS management decides whether the return on their investment is paying off and helping them lower costs and improve margins.
It's an advertisement to investors that they have a new product that's better than their last
Ah I see. What has the world gotten to? (by which I mean businesses should not advertise to raise their stock price)
How else do you fulfill your fiduciary duty to shareholders?
Tweet lies to manipulate the stock, like Elon Musk.
Can someone please confirm, is the Graviton an ARM-based CPU or something different? The page mentioned ARM, but I was still a little confused. Are we able to launch a Debian/Fedora using the CPU, or is meant for something different?
Yes, the Gravitons are AWS's ARM-architecture instances.
Thanks, so it's "standard" ARM we can launch VMs with? I wasn't sure if this was some sort of proprietary ARM chip used for specialized work.
As far as I'm aware, if it's called an ARM CPU it's either the v7 or v8 instruction set, with the possibility of extra instructions (changes to the ARM die) or a tightly integrated coprocessor (via the AXI bus, adjacent to the ARM silicon on the same substrate).
There are different Cortex series that optimize for different things: A and X for applications (phones, cloud compute, SBCs, desktops and laptops), M for microcontrollers, and R for realtime.
This doesn't apply if the company has an ARM founder and/or architectural license (I think that's what they're called). E.g. Apple and their M-series SoCs are not Cortex cores, but share the base instruction set - but only if Apple wants them to.
Yup, Amazon supports the 6.11(?) kernel on aarch64. Most toolchains, if you target static linux-aarch64, will produce executables that run on Amazon Linux aarch64, Android, and set-top boxes with 64-bit chips and Linux 3+. It's surprising how many devices a static aarch64 ELF will run on.
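The reason the same static ELF is accepted everywhere is that the loader (and tools like `file`, or the kernel's binfmt_misc dispatch mentioned above) just reads the architecture from the ELF header's `e_machine` field. A toy Python sketch of that check - the header bytes here are hand-built for illustration, not taken from a real binary:

```python
import struct

EM_AARCH64 = 183  # e_machine value assigned to 64-bit ARM

def elf_machine(header: bytes) -> int:
    """Return the e_machine field from the first 20 bytes of an ELF header."""
    assert header[:4] == b"\x7fELF", "not an ELF file"
    # e_machine is a little-endian u16 at offset 18 (for an LSB ELF)
    return struct.unpack_from("<H", header, 18)[0]

# Hand-built 20-byte prefix of a 64-bit little-endian aarch64 ELF header:
# magic; class=2 (64-bit); data=1 (LSB); version; padding; e_type=2 (EXEC); e_machine
fake_header = (b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8
               + struct.pack("<HH", 2, EM_AARCH64))

print(elf_machine(fake_header) == EM_AARCH64)  # True for aarch64 binaries
```

Pointing the same function at the first 20 bytes of any binary on disk tells you whether it will run natively on a Graviton or Android box, or needs qemu-user.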
Awesome, thanks for this. Off to build new Ansible deployment scripts for aarch64!
Yes, think AMD vs Intel. Same x86 target but built differently under the hood, with the potential to optimize for certain uses over others.
It's based on ARM Neoverse V3 cores, which are very similar to the latest high-performance mobile Cortex-X4 cores.
Are they updating the t class instances to t5g as well?
They usually end up upgrading most instance types to new graviton generations, it just takes time to do the full rollout.
Not really: burstable (“t”) instances haven't been updated in years. The current generation (“t4g”) still use Graviton2 processors. I get the impression that they would vastly prefer cost-conscious users to use spot instances.
The -flex suffix variants seem to be the new spiritual successor to the t burstable class.
E.g. c7i-flex.large, etc.
> With 192 cores per chip
Just like AMD Epyc.
> and 5x larger cache,
Larger than what? 16k?
Larger than the same cache level on its predecessor (the Graviton4).