Here's my compilation of AI learning resources; I think some of the ones I've collected will be a better place to start for most people.
I categorized them by goal: building products, deploying custom models, or self-study toward AI research scientist and research engineering roles.
Yes, there are tons of resources, but I'll try to offer some simple tips.
1. Sales is a lot like golf. You can make it so complicated as to be impossible or you can simply walk up and hit the ball. I've been leading and building sales orgs for almost 20 years and my advice is to walk up and hit the ball.
2. Sales is about people and it's about problem solving. It is not about solutions or technology or chemicals or lines of code or artichokes. It's about people and it's about solving problems.
3. People buy 4 things and 4 things only. Ever. Those 4 things are time, money, sex, and approval/peace of mind. If you try selling something other than those 4 things you will fail.
4. People buy aspirin always. They buy vitamins only occasionally and at unpredictable times. Sell aspirin.
5. I say in every talk I give: "all things being equal people buy from their friends. So make everything else equal then go make a lot of friends."
6. Being valuable and useful is all you ever need to do to sell things. Help people out. Send interesting posts. Write birthday cards. Record videos sharing your ideas for growing their business. Introduce people who would benefit from knowing each other then get out of the way, expecting nothing in return. Do this consistently and authentically and people will find ways to give you money. I promise.
7. No one cares about your quota, your payroll, your opex, your burn rate, etc. No one. They care about the problem you are solving for them.
There is more than 100 trillion dollars in the global economy just waiting for you to breathe it in. Good luck.
I want to be careful about throwing shade, because I apparently know some of the people involved and they are smarter than me, but this is pretty basic. Take a whack at the 2019 final exam for a flavor of where they're at.
The CORS content is solid. But the vulnerabilities themselves are dated. As a threshold concern, a 2020 web security class needs to be teaching about SSRF, the most important current web bug class. OAuth flows would be another thing I'd hope to see covered.
There's always going to be new stuff that can't be covered; I understand how these curricula work†, and don't expect HTTP Request Smuggling or DNS fingerprinting on the final. But system("cat ${input}")?
† The network security course taught at major CS research universities was written at one place like 10 years ago and shared and handed down from semester to semester; I assume something similar happens here.
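The system("cat ${input}") pattern mentioned above is classic shell command injection. A minimal sketch in Python (function names are mine) showing the vulnerable form next to the fix:

```python
import subprocess

def read_file_unsafe(user_input: str) -> str:
    # Vulnerable: the system("cat ${input}") pattern. With shell=True,
    # user_input = "notes.txt; rm -rf ~" runs BOTH commands.
    return subprocess.run(f"cat {user_input}", shell=True,
                          capture_output=True, text=True).stdout

def read_file_safe(user_input: str) -> str:
    # Safe: the argument-list form never invokes a shell, so shell
    # metacharacters in user_input are just bytes in a filename.
    return subprocess.run(["cat", user_input],
                          capture_output=True, text=True).stdout
```

(Better still, don't shell out at all: `open(user_input).read()` after path validation.)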
PS
3 hours is a bananas amount of time to get for this exam. We're speedrunning it on Slack and the median is closer to 15 minutes (albeit without writing careful answers). If this were a commuter school with students who don't come in knowing how to code, sure; but this is Stanford CS!
1. learn how to communicate: being a good developer requires as many social skills as technical ones (maybe more). I would recommend formal training and practice.
2. question all assumptions.
3. there is no silver bullet (read mythical man month).
4. fight complexity and over-engineering: get to your next MVP release.
5. always have a releasable build (stop talking and start coding).
6. code has little value in and of itself; only rarely is a block of code reusable in other contexts.
7. requirements are a type of code; they should be written very precisely.
8. estimation is extrapolation (fortune telling) with unknown variables of an unknown function.
9. release as frequently as possible, to actual users, so you can discover the actual requirements.
10. coding is not a social activity.
For any system you build, keep in mind a few general truths:
1) The system isn’t just “the wiki” or “Notion”. The system is composed of both the tools you’re using AND the habits/expectations of the humans who use them. So this is not just a matter of buying a tool; it’s a matter of choosing tools AND designing a process made of human habits.
2) The system will get messy and unused unless there is regular attention & time allocated to tidying it.
3) If you want people to do something, recognize and incentivize it. If a person habitually sends out concise notes after meetings, make sure their performance review recognizes that contribution.
4) A habit has three parts: {:situation, :action, :reward}
Example:
Situation: At the end of a retrospective meeting, we have learned of a need for a runbook on how to handle a type of automated alert.
Action: The team lead updates a runbook and tags a junior engineer to review it for missing context.
Reward: The team lead feels satisfaction that they’ve set their team up for future success. They add a bullet point to the notes to use during their next annual review.
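The three-part habit structure above could be sketched as a simple record type (the type and field names are mine; the example values come straight from the scenario above):

```python
from dataclasses import dataclass

@dataclass
class Habit:
    situation: str  # the trigger that starts the loop
    action: str     # what the person actually does
    reward: str     # what reinforces doing it again

runbook_habit = Habit(
    situation="Retro surfaces the need for a runbook for an automated alert",
    action="Team lead updates the runbook and tags a junior engineer to review",
    reward="Satisfaction, plus a bullet point for the next annual review",
)
```

Writing a habit down in this shape makes it easy to spot which of the three parts is missing when a process isn't sticking.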
Quite excited for this! We're currently experimenting with DynamoDB and managing our own rollups of our incoming data (previously on RDS, which is not a good choice for this kind of data).
---
I've seen a lot of people complain about pricing, so I thought I'd share a little why we are excited about this:
We have approximately 280 devices deployed, monitoring production lines and sending aggregated data every 5 seconds via MQTT to AWS IoT. We see around ~2 million published messages a day (equipment is often turned off when not producing). Each packet is very small and highly compressible, well below 1KB, but let's call it 1KB.
We then funnel this data into Lambda, which processes it, writes it to DynamoDB, and handles rollups. The cost of that whole pipeline is approximately $20 a day (IoT, DynamoDB, Lambda and X-Ray), with Lambda+DynamoDB making up $17 of it.
Finally, our users look at this data live on dashboards, usually the last 8 hours for a specific device. Let's assume around 10,000 queries each day against that day's data (2GB/day / 280 devices = 0.007142857 GB/device/day).
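A quick back-of-envelope check of the data-volume figures above (all inputs taken from the numbers in this comment):

```python
# Figures from the setup described above.
messages_per_day = 2_000_000
message_size_kb = 1          # rounded up; actual packets are smaller
devices = 280

# Total ingest volume: 2M messages * 1KB = 2 GB/day (using 10^6 KB/GB).
data_gb_per_day = messages_per_day * message_size_kb / 1_000_000
per_device_gb = data_gb_per_day / devices

print(data_gb_per_day)            # 2.0
print(round(per_device_gb, 9))    # 0.007142857
```

This matches the 2GB/day and ~0.0071 GB/device/day figures used in the query estimate.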
---
Now, running the same numbers against the AWS Timestream pricing[0] (daily cost): from these (very) quick calculations, we could lower our cost from ~$20/day to ~$4.5/day. And that's not even counting that it removes the need to build and maintain our own custom rollup solution.
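To make the claimed savings concrete (both dollar figures are from the estimates above; this is just arithmetic, not a pricing calculation):

```python
current_daily = 20.0     # current IoT + Lambda + DynamoDB + X-Ray spend, $/day
timestream_daily = 4.5   # rough Timestream estimate, $/day

savings = current_daily - timestream_daily
print(savings)                             # 15.5  ($/day saved)
print(round(savings / current_daily, 3))   # 0.775 (~77% cheaper)
print(savings * 365)                       # 5657.5 ($/year saved)
```

And that yearly figure still excludes the engineering time spent maintaining the custom rollup code.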
I am probably missing some details, but it does look bright!
https://github.com/swyxio/ai-notes/blob/main/README.md#top-a...
and then you can go into the individual modality-specific notes for more reading