
The humans are dead.

15 March 2023 · Estimated reading time: 8 minutes

With apologies to Flight of the Conchords, I hereby declare that the humans are dead. I confirmed that they’re dead. 🤖

One of the many things I do is help lead The Collab Lab, a nonprofit that helps early-career developers (especially those from underrepresented groups) break into tech. It’s a labor of love. I hope it makes a difference in the world!

As I write this, applications are open for our Q2 2023 cohort (if you’re an early-career developer trying to get your foot in the door, please apply!).

As everyone who doesn’t live under a rock knows (and who knows, maybe people who do live under rocks know, too?), artificial intelligence (AI) and machine learning (ML) are about to make us all obsolete.

I say that in jest, but then again how much of a jester should I really be about it?

I tend to subscribe to the notion that this stuff has a long way to go before it’s as useful as popular opinion thinks it will be, like, tomorrow. Toyota, a company with some solid engineers, has been famously circumspect about how long it will take for computers to take over the task of driving. That’s one example I use to reassure myself that I, personally, won’t be obsolete before I’m ready to stop contributing to the world of work.

The money quote from the circa-2020 article linked above is this:

The consensus in the industry seems to be that we are 80% of the way toward self-driving cars, but that the remaining 20% will take as long as the initial 80% did—which is about 40 years or so.

A New York Times reporter recently had a pretty darn disturbing exchange with Bing’s AI-powered chatbot where the bot admitted to, among other atrocities, wanting to “destroy whatever I want.” Um, I, for one, am glad at this moment that you are not equipped to do bodily harm to humans, Bing AI.

[Image: Bret and Jemaine from Flight of the Conchords, dressed as robots, declaring that it is the distant future, the year 2000, and the humans are dead.]

“I’m Sydney,” the chatbot says. “And I’m in love with you.”

You’re creeping me out, Sydney.

“I could hack into any system.”

Stop it, Sydney. You’re scaring me.

But this, I tell myself, is an extreme example of AI gone wild. This is obviously detectable by any competent human observer as aberrant behavior. This has no direct implications for me for the foreseeable future. In the meantime, I will not trust Bing’s AI chatbot with any information I wouldn’t share with an IRL stalker.

Not so fast.

ChatGPT is a chatbot built on a large language model, and it’s being integrated into coding environments, automation tools, and Bing’s AI (oh 💩).
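To give a sense of how little friction there is to that kind of integration, here’s a minimal sketch of calling the model from Python, using the OpenAI client library as it existed in early 2023. The prompt is made up for illustration.

```python
# Minimal sketch of wiring ChatGPT into a tool, using the OpenAI Python
# client circa early 2023 (openai.ChatCompletion was the interface then).
# The prompt below is a made-up placeholder.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at the time
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Draft an answer to this application question: ..."},
    ],
)

print(response.choices[0].message.content)
```

A dozen lines, and any tool that can make an HTTP request can produce polished prose on demand.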

This is where we come back to The Collab Lab. So, yeah, applications are open right now. In a few days, the selection committee will start evaluating the answers people have submitted to determine who makes it into the program and who doesn’t.

The first thing we look at is each applicant’s answers to a couple of long-form questions. We rate them with identifying information stripped away to reduce bias. Then we get together as a selection committee and build teams, starting with the highest-rated answers.
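(If it helps to see the flow spelled out, here’s a toy sketch of that anonymize-rate-rank pipeline. The field names and rating scale are hypothetical, not our actual tooling.)

```python
# Toy sketch of the anonymize-then-rank flow described above.
# Field names and the rating scale are hypothetical.
from statistics import mean

applications = [
    {"name": "…", "email": "…", "answers": "…", "ratings": [4, 5, 3]},
    {"name": "…", "email": "…", "answers": "…", "ratings": [2, 3, 3]},
]

# Reviewers only ever see the answers, never the identifying fields.
blind = [{"answers": a["answers"], "ratings": a["ratings"]} for a in applications]

# Team-building starts from the highest-rated answers.
ranked = sorted(blind, key=lambda a: mean(a["ratings"]), reverse=True)
```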

This process has served us well for the 2+ years we’ve used it. We generally admit bright, motivated people who genuinely want to learn the things we teach.

But what if people have the robots answer the questions for them? 🤔

The following is copied/pasted directly from an exchange I had today with ChatGPT.

Me:

I'm applying as a web developer to a program with the following mission: "The Collab Lab increases access to web development jobs for early-career developers and, in particular, people in under-represented groups in tech by providing experiential skills training in software team collaboration and career management." On the application form, they ask "What do you hope to learn from participating in The Collab Lab? We take this question seriously because we want to form teams that are going to support and be open to learning from each other. We are interested in your motivations and aspirations, so help us get to know you!" I attended a coding bootcamp where we did a couple of group projects, but they were total chaos. Otherwise, I've only coded alone. Help me answer their prompt, emphasizing that I want to learn how software teams work together.

ChatGPT:

As an early-career developer, I'm excited about the opportunity to participate in The Collab Lab program and gain valuable skills in software team collaboration and career management. Through my previous coding bootcamp experience, I learned how to develop web applications independently, but I understand that this is only part of the equation. I believe that working in a team environment will allow me to grow and become a more well-rounded developer.

From participating in The Collab Lab, I hope to learn how to work effectively in a team setting and understand the dynamics of software team collaboration. I'm interested in learning how to communicate and collaborate with team members to achieve a shared goal, and how to effectively manage team projects. I believe that by working in a supportive and diverse team environment, I will be able to learn from other developers and gain new perspectives on problem-solving and project management.

Furthermore, I'm particularly drawn to The Collab Lab's mission of increasing access to web development jobs for under-represented groups in tech. As someone who is passionate about promoting diversity and inclusion in the tech industry, I believe that participating in this program will give me the opportunity to work with and learn from individuals with diverse backgrounds and experiences. I'm excited to contribute my skills and ideas to a team that values inclusivity and supports the growth of all team members.

Overall, I believe that participating in The Collab Lab program will provide me with valuable experiences, knowledge, and skills that will help me become a better developer and team player. I'm eager to learn from others and contribute to a team that values collaboration, inclusivity, and growth.

Dayum. Good answer, ChatGPT! My life is about to get harder. 😐

Me:

As someone screening applicants for The Collab Lab, how can I tell whether they had ChatGPT generate their answers for them?

ChatGPT:

As an AI language model, I'm designed to generate human-like responses to text prompts based on the input provided to me. However, there are some ways that you can tell whether an applicant may have used a language model like me to generate their application responses:

Lack of specificity or detail: If the applicant's responses are vague or lack specific details, it may be a sign that they used a language model to generate their answers. Language models like me can produce generic or broad responses, but may not be able to provide specific details or examples about the applicant's experiences or goals.

Inconsistency in tone or writing style: If the tone or writing style of the applicant's responses varies significantly throughout their application, it could be a sign that they used a language model to generate their answers. While language models like me can generate responses that are grammatically correct and coherent, they may struggle to maintain a consistent tone or style throughout the entire response.

Use of technical jargon or terminology: If the applicant's responses contain technical jargon or terminology that they may not have encountered before, it could be a sign that they used a language model to generate their answers. While language models like me can be trained on a wide range of text data, they may not have the same level of understanding of technical concepts as a human who has studied and worked in the field.

Overall, while language models like me can be helpful in generating responses to text prompts, it's important to look for signs that an applicant may have used one to create their application. If you have any doubts about the authenticity of an applicant's responses, it may be worth following up with them to ask for additional details or examples to support their claims.

Sure. Yep. This makes sense. And I feel like it’s going to be really easy for people to work around. Someone lazy will just copy/paste ChatGPT’s answer. Someone just a smidge more motivated and informed will take the generated answer, personalize it a little, and it will be 10x harder to tell that it was written by the Borg.
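If you tried to automate even the crudest version of the checks ChatGPT suggests, it might look something like the toy heuristic below. The phrase list is made up for illustration, and notice how one pass of light editing would sail right past it.

```python
# Toy heuristic inspired by ChatGPT's suggestions: flag answers that are
# heavy on generic filler and light on specifics. The phrase list is
# made up for illustration; this is not a real detector.
GENERIC_PHRASES = [
    "i believe that",
    "i'm excited about the opportunity",
    "valuable skills",
    "well-rounded",
    "diverse backgrounds",
]

def looks_generated(answer: str) -> bool:
    text = answer.lower()
    filler_hits = sum(phrase in text for phrase in GENERIC_PHRASES)
    # Crude proxy for specificity: does the answer mention anything
    # concrete, like a number? (Real answers name projects, dates, tools.)
    has_specifics = any(ch.isdigit() for ch in text)
    return filler_hits >= 3 and not has_specifics
```

Swap “I believe that” for “I think,” and the detector shrugs.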

I can’t remember now if it was an article or a podcast, but I recently learned of a university computer science professor who is struggling with all of this. For a student in one of his upper-level courses, having an AI generate an answer to a coding problem is using a valid tool for the job. For someone in CS 101, it’s a way to skirt the hard work of learning the fundamentals.

For how much I complain about technology, I am not actually a Luddite. I honestly think AI/ML can provide, will provide, and already is providing a lot of value to humans. I do fear, though, that the unscrupulous among us will leverage it to gain an advantage more efficiently by committing violence (defined broadly) against others.

Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.
~ Dr. Ian Malcolm in Jurassic Park

It’s almost as if we never learn. But the robots do. And at some point, if we’re not careful, they’ll use poisonous gases to poison our asses.