Post Amazon Plans

My time at Amazon was a valuable experience that I cherish greatly. As my contract reached its 11-month limit, I prepared to say goodbye to my team and stakeholders.

Despite it being a Friday near the holiday break, the number of people who attended my farewell gathering surprised me. I hadn't expected much, since major releases were coming up and we were understaffed. Team members from both horizontal and vertical teams joined in to say farewell, which was very kind of them. The quality of my work and my lasting impression as a technical writer had built strong relationships and bonds across teams.

I felt loved as we spoke about the projects I'd undertaken and the sheer amount of growth. During the virtual farewell gathering, I received a card from the organization! It made my day reading through it and reminiscing: it had only been 11 months, yet I'd worked on so many projects backed by brilliant minds across STEM fields.

What’s next

During the farewell, after we gave thanks, I had a rare opportunity to get advice from seasoned professionals, and I wasn't going to waste it. The org I worked for employed PhD professionals with many science awards under their belts. I got advice on what to pursue as someone interested in Machine Learning and fascinating "future of tech" roles, like prompt engineering.

Career with LLMs

Prompt engineering sounds great, and there are plenty of good resources out there; I recommend reading articles by Chip Huyen. However, I feel it's starting to dive into the "murky" side of engineering. From my experience working with Data Engineers, the same development cycle applies to prompt engineering: the focus falls on cleaning data and analyzing outputs, leaving less time for the enjoyable parts of prompting.

In a way it's similar to how companies overhired data scientists to analyze data, only to find out they needed data engineers to clean the data first. On a small scale this isn't too bad, but factor in commercial use and you're dealing with big data spread across multiple distributed stores, each behind its own level of security clearance.

This increasing legal barrier has put me off the traditional idea of a prompt engineer. I have a strong gut feeling about pioneering a new field: prompt writing. If the inputs aren't good and the training data lacks quality, it doesn't matter how much bias you clean out; you'll still end up with a rotten output. Prompt writers know how to shorten context lengths by crafting contextual prompts, among other techniques.

I've played with LLMs to build roadmaps, replace Google search for some topics, and write DnD (world-building) descriptions. But lately the LLMs seem to be getting worse, so I looked into why: they're now far too general to act as experts on classical problems.

Because of this, I don't see a career with LLMs until someone improves their expert capabilities. I've seen some good domain-specific models and believe the transformer architecture is the way to go. I'm more interested in having some kind of version control to retrace the prompt context. The downside is that this eats up context tokens and results in one- or few-shot models that cannot hold extended conversations. What would solve all of this is a series of prompts parsed and written out to generate something like Git for text. RANT OVER.

Post-Graduate Studies

One of the biggest caveats of working in Core AI at Amazon is that, being a Science organization, its bar is extremely high. Most full-time candidates have a PhD or a master's degree, along with impressive awards for scientific contributions. I felt junior when I looked at their qualifications, and channeled that energy into self-improvement.

Now is still a great time to apply for post-graduate education, and at the farewell I was able to get some amazing work references from PhD graduates in ML/AI/Economics from CMU, Berkeley, and other prestigious colleges.

The next question is what I would study and where I should go. To be continued…