[personal profile] izard
Back in university, my friends and I had a trick for exams in advanced physics and math courses: we'd try to sit them with a postdoc or, better yet, a PhD student rather than a seasoned professor. Professors, with decades of experience, could gauge a student's depth of understanding after just a few questions and assign grades accurately. PhD students, however, lacked this refined intuition and often gave higher grades if you could converse confidently about basic concepts and show a good grasp of the terminology. They tended to assume the best, consciously or subconsciously. The real test wasn't the material itself, but skilfully steering the exam into a dialogue where you could shine.

So, why do I find myself reflecting on this old trick now? I'm reminded of it whenever I think about the grueling rounds of technical interviews at big tech companies. If you've ever gone through the process, you know how much it feels like a series of university exams, only now the subjects are coding challenges and algorithms instead of advanced physics. As an interviewer, I often fell into the same trap as those PhD students: giving positive feedback to candidates who had good communication skills and could solve problems with the guidance I was too eager to provide. The dynamic often felt more like a two-way conversation than a true assessment of problem-solving skills.

Could this be why big tech firms rely on multiple interview rounds? Perhaps it's an attempt to compensate for the lack of that "professor" in the room: someone who can effortlessly see through a candidate's responses and deliver an objective judgment based on decades of experience. I don't think it's necessarily a bad thing. If a prospective colleague can solve interview problems with some guidance in a stressful interview setting, all while maintaining a casual conversation, then I am confident they can do the same in a work setting and will work well in a team.

Date: 2024-10-19 08:24 am (UTC)
From: [personal profile] juan_gandhi
It's not fair, right? I feel like I'm that professor. I pay more attention to candidates with communication problems (like, one guy stuttered, especially because he was nervous). As for smooth-talking brahmins, they are easily checked by going deeper and deeper into the problem. Some do it easily, and you see that they understand; others continue their generic speech... so I change the topic to something easier, just to make sure the guy does not get upset. But an Indian probably understands my attitude anyway; they are smarter in communications than us mortals.

Date: 2024-10-19 06:25 pm (UTC)
From: [personal profile] juan_gandhi
Of course perf is not my area of expertise. I like the stories, but...

When interviewing, I was interested in two things: the ability to write code and an open mind. Doesn't matter, CS prof or not; I interviewed way before that, over 100 candidates while at Google.

See, if you ask whether they can write a regex to parse an arithmetic expression, the reactions vary. One just starts laughing (a Polish guy); another says: OK, can do it in Perl (isn't that a good answer?). And you can definitely distinguish a clueless one.
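
(Both reactions make sense, by the way: nested parentheses make arithmetic expressions a non-regular language, so a classic regular expression cannot parse them, while Perl-style recursive patterns can. Here is a minimal sketch of the recursive approach, assuming Python's third-party regex module, which supports the recursion the stdlib re lacks:)

import regex  # third-party module; the stdlib `re` has no recursion

# A number, or a parenthesized sub-expression, optionally followed by
# an operator and another expression; the recursion happens via (?&expr).
EXPR = regex.compile(r"""
    ^ (?P<expr>
        \s* (?: \d+ | \( (?&expr) \) )     # a number or (expr)
        (?: \s* [-+*/] (?&expr) )?         # optionally: operator, expression
        \s*
    ) $
""", regex.VERBOSE)

print(bool(EXPR.match("1 + (2 * (3 - 4))")))   # True
print(bool(EXPR.match("1 + (2 * (3 - 4)")))    # False: unbalanced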

My other question was about generating a stream of words. For many, the idea that your function produces an infinite number of results is totally stunning; they just can't grasp it. Yet of course the same people can slap together a web server that never ends (on its own). Just the general idea of stepping outside the standard realm of programming stuns some.
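
(A minimal sketch of what such a function might look like in Python, assuming the question means "enumerate every possible word over a lowercase alphabet", which is my reading rather than the original phrasing:)

import itertools
import string

def words():
    """Yield an infinite stream of words: a, b, ..., z, aa, ab, ..."""
    for length in itertools.count(1):  # lengths 1, 2, 3, ... forever
        for letters in itertools.product(string.ascii_lowercase, repeat=length):
            yield "".join(letters)

# The caller, not the function, decides how much of the stream to consume:
print(list(itertools.islice(words(), 4)))  # ['a', 'b', 'c', 'd']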

But at Google most candidates did well. With some funny exceptions, mostly CCD ("can't code disease"). Coding is not for everyone.

Date: 2024-10-20 01:25 am (UTC)
From: [personal profile] dennisgorelik
> Now you can ask it to write short code parts and then just plug them in after minor editing

Are you implying that ChatGPT somehow cured "Can't code disease"?

ChatGPT helps with coding, but it does not cure CCD: in order to code you need to be able to read code, and ChatGPT does not help much with that.

Date: 2024-10-20 04:16 pm (UTC)
From: [personal profile] dennisgorelik
> I literally stopped writing code 2 years ago (I am using local models I fine-tune for me, not ChatGPT)

Do you mean that you do not even have to correct the code that your local AI models produce for you?
Does that auto-generated code work as is?

From my experience, AI-generated code is good as an initial prototype, but not as a final product.

Date: 2024-10-21 12:31 am (UTC)
From: [personal profile] dennisgorelik
> editing and writing are two different skills

If an engineer edits code but does not write new code, do you call that "can't code disease"?

Date: 2024-10-21 07:29 am (UTC)
From: [personal profile] dennisgorelik
> That is what interviewer will see

What is the "that" that the interviewer will see?

> if they give me pen and paper or a whiteboard during an interview

Why wouldn't you have access to your code generator during the interview?
The interview environment should be similar to your work environment.
