Forgive me if this is not the right place to comment on the "Chip Avengers 2023: Chips Act + AI Revolution" podcast, but at the 45:45 mark one of the participants says:

"If you want to deploy a cutting edge large language model you still need high performance AI accelerators to do anything with it."

Nonsense. First of all, scoring (inference) is much cheaper than training, and second, all this software is designed from the ground up to be highly parallelizable. Whether you are using TensorFlow or XGBoost or pretty much any of the popular libraries, if you are stuck on two-generation-old silicon, all you have to do is make your cluster roughly four times as big and Bob's your uncle. Dylan, bro, ya gotta step in when the non-techies start feeding each other copium.
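The back-of-envelope math here is simple; a tiny sketch, with every throughput number made up purely for illustration (real per-chip inference rates depend on the model and workload):

```python
# Hypothetical numbers: suppose a two-generation-old accelerator delivers
# about a quarter of the inference throughput of a cutting-edge one.
old_chip_tps = 250    # tokens/sec per old chip (made-up figure)
new_chip_tps = 1000   # tokens/sec per new chip (made-up figure)
new_cluster = 16      # cutting-edge chips in the reference cluster

# Because inference is data-parallel (independent requests shard cleanly
# across chips), aggregate throughput scales roughly with cluster size.
needed_old_chips = new_cluster * new_chip_tps / old_chip_tps
print(needed_old_chips)  # 64.0 -> a 4x bigger cluster, same throughput
```

The point being that for scoring, silicon generation trades off almost linearly against cluster size, which is exactly why export controls on chips alone don't gate deployment.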

As for whether ChatGPT will be able to keep the Chinese from reverse-engineering its models, or from developing models that are just about "as good": maybe, but it isn't going to be silicon that gets in their way.


Ha, James, thanks for this! If you want to write this up into a few hundred words you'd be comfortable publishing, I'd be happy to run it in a 'Friday Bites'!


Jordan - I emailed you a submission.
