Machine Learning/Data Sciences/R
Searching for a Job in the Age of Hype!
This month, we’re trying something different with a change of venue. I’m hoping for more audience participation, and I’d love to add your voice to our discussion.
There has been a lot of hype surrounding AI, particularly around machine learning (ML), large language models like ChatGPT, and even quantum computing. I’m interested in hearing about your personal experiences with job searching in AI and ML. While many blogs discuss this topic, it would be valuable to survey people directly and gather their insights.
Lately, I’ve come across posts claiming that anyone with some ML experience can easily find a job in the field, or management drinking the AI Kool-Aid and insisting that you can now replace half the programming staff with AI. However, recent graduates with degrees specializing in machine learning often share their struggles, including having to settle for positions far below their expectations. If anything, it seems that publishing papers beyond your ML master’s degree is now necessary. One sign of the slowdown in ML jobs is the shutdown of many coding bootcamps.
I guess the standard path to ML is to get an M.Sc. in ML/AI or something related, get a first job in that field, and then job-hop chasing the tech bandwagon. Another approach is to develop a showcase project, which you can either use to impress a hiring manager or use as the basis for a startup business, hoping to sell it or at least live off the revenue stream if it succeeds. A corollary might be to use some AI in your current work project, hopefully as a springboard for showing off your talents.* The third option is to become an AI influencer by uploading videos of your speculations about the field; maybe someone will love your takes enough to offer you a job. Just as in the health sciences, a degree of specialization happens: the hot fields attract lots of money, and the others dry up.
This kind of hype isn’t new. We can recall the internet implosion of 2001. I remember 1992 when DEC launched the Alpha chip—a technical marvel that didn’t sell as expected. Or Arvind’s DataFlow architecture, which was supposed to revolutionize CPU design by addressing the von Neumann bottleneck, yet Motorola lost $50 million (at a time when that was a significant amount of money). And, of course, we’ve been “20 years away” from sustained nuclear fusion for the past 40 years.
The other problem is the astronomical valuations being placed on these AI startups. I understand that many will fail, but as a taxpayer I don’t want to be stuck with a bailout of IBM, OpenAI, or Google when they become too big to let fail and part of the American infrastructure, like the banks. I say let them fail, along with the Crypto Strategic Reserve.
The other big problem I have is that I don’t want to give up my own understanding of the problem and solution to whatever comes out of an AI. Anyone who has sat through a week-long session of a large planning committee will never fear AI taking their job: there are so many wrong decisions being made by people unfamiliar with the process.
Remember: to err is human, but to really foul things up takes a computer. I think we are in "The Uncanny Valley" and will be there for quite a while.
NB. The above image has been generated with ChatGPT!
* I remember a funny story about a guy who had the opposite problem, with fuzzy logic. His risk-averse manager told him he didn’t want fuzzy logic in the solution. The engineer simply reformulated the solution to remove any references to FL and restated it in classical terms.