Exams and big tech interviews
Oct. 19th, 2024 10:12 am
Back in university, my friends and I had a trick when it came to exams in advanced physics and math courses: we’d try to sit exams with a postdoc or, better yet, a PhD student rather than a seasoned professor. Professors, with decades of experience, could gauge a student’s depth of understanding after just a few questions, accurately assigning grades. PhD students, however, lacked this refined intuition and often gave higher grades if you could converse confidently about basic concepts and demonstrate a good grasp of basic terminology. They tended, consciously or subconsciously, to assume the best. The real test wasn’t the material itself, but skilfully steering the exam into a dialogue where you could shine.
So, why do I find myself reflecting on this old trick now? I’m reminded of this experience whenever I think about the grueling rounds of technical interviews at big tech companies. If you’ve ever gone through this process, you know how much it feels like a series of university exams—only now, the subjects are coding challenges and algorithms instead of advanced physics. As an interviewer, I often fell into the same trap as the PhD students—giving positive feedback to candidates who had good communication skills and could solve problems with the guidance I was often too eager to provide. The dynamic often felt more like a two-way conversation than a true assessment of problem-solving skills.
Could this be why big tech firms rely on multiple interview rounds? Perhaps it’s an attempt to compensate for the lack of that "professor" in the room—someone who can effortlessly see through a candidate's responses and provide an objective judgment based on decades of experience. I don't think it’s necessarily a bad thing: if prospective colleagues can solve interview problems with some guidance in a stressful interview setting, while maintaining a casual conversation, then I am confident they can do the same in a work setting, and they've shown they can work well in a team.
no subject
Date: 2024-10-19 08:24 am (UTC)

no subject
Date: 2024-10-19 05:20 pm (UTC)

When I was interviewing at Apple for performance engineering positions, I had to give the same work-related test problems to everyone. But I still gave the same positive feedback to candidates who solved them easily with minimal guidance and to those who needed a lot of guidance, as long as they could follow it. Even so, ~70% were failures or near failures: candidates who could not follow the guidance because they didn't know basic terminology or didn't have even a general understanding of how CPUs and GPUs work under the hood.
no subject
Date: 2024-10-19 06:25 pm (UTC)

When interviewing, I was interested in two things: the ability to write code and an open mind. Does not matter, CS prof or not. Way before that, I interviewed over 100 candidates while at Google.
See, if you ask whether they can write a regex to parse an arithmetic expression, the reactions vary. One just starts laughing (a Polish guy); another says: OK, I can do it in Perl (isn't that a good answer?). And you can definitely distinguish a clueless one.
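The question is a trap in the classic sense: a flat arithmetic expression is a regular language, but arbitrarily nested parentheses are not, so no classic regex can fully parse arithmetic (Perl escapes the trap only because its regexes support recursion). A minimal Python sketch of both halves of that point, with an assumed grammar of integers and the four binary operators:

```python
import re

# A flat expression (no parentheses) IS regular, so a plain regex handles it:
FLAT = re.compile(r'^\d+(\s*[-+*/]\s*\d+)*$')

def balanced(s: str) -> bool:
    """Check balanced parentheses: needs a counter (state), which a
    classic regex cannot keep for unbounded nesting depth."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:          # a ')' with no matching '('
                return False
    return depth == 0
```

The names `FLAT` and `balanced` are mine, not from the thread; the sketch only illustrates why "can you write a regex for this?" separates candidates who know where regular languages end from those who don't.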
My other question was about generating a stream of words. For many, the idea that your function produces an infinite number of results is totally stunning. They just can't grasp the idea. But of course they may slap together a web server which never ends (on its own). Just the general idea of going outside of the standard realm of programming stuns some.
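The "stream of words" idea can be sketched as a Python generator; the enumeration order below (all words over a-z, shortest first) is my assumption, since the thread doesn't spell out the exact question:

```python
import itertools
import string

def words():
    """Lazily yield every lowercase word: a, b, ..., z, aa, ab, ...
    The stream never ends; the CALLER decides how much to consume."""
    for length in itertools.count(1):                    # 1, 2, 3, ...
        for letters in itertools.product(string.ascii_lowercase,
                                         repeat=length):
            yield ''.join(letters)

# Consuming a finite prefix of the infinite stream:
first_30 = list(itertools.islice(words(), 30))
```

The stunning-to-some part is exactly the inversion of control: the function describes an infinite result set, yet runs only as far as the consumer pulls.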
But at Google most candidates did well. With some funny exceptions, mostly CCD ("can't code disease"). Coding is not for everyone.
no subject
Date: 2024-10-19 08:28 pm (UTC)

no subject
Date: 2024-10-20 01:25 am (UTC)

Are you implying that ChatGPT somehow cured "can't code disease"?
ChatGPT helps with coding, but it does not cure CCD, because in order to code you need to be able to read code, and ChatGPT does not help much with the ability to read code.
no subject
Date: 2024-10-20 12:29 pm (UTC)

no subject
Date: 2024-10-20 04:16 pm (UTC)

Do you mean that you do not even have to correct the code that your local AI models produce for you?
Does that auto-generated code work as is?
From my experience, AI-generated code is good as an initial prototype, but not as a final product.
no subject
Date: 2024-10-20 05:06 pm (UTC)

> From my experience, AI-generated code is good as an initial prototype, but not as a final product.
Same here, but editing and writing are two different skills.
no subject
Date: 2024-10-21 12:31 am (UTC)If an engineer is editing code, but does not write new code - do you call it "can't code disease"?
no subject
Date: 2024-10-21 05:49 am (UTC)

no subject
Date: 2024-10-21 07:29 am (UTC)

What is the "that" that the interviewer will see?
> if they give me pen and paper or a whiteboard during an interview
Why wouldn't you have access to your code generator during the interview?
The interview environment should be similar to your work environment.