[personal profile] izard
Back in university, my friends and I had a trick for exams in advanced physics and math courses: we'd try to sit the exam with a postdoc or, better yet, a PhD student rather than a seasoned professor. Professors, with decades of experience, could gauge a student's depth of understanding after just a few questions and assign grades accurately. PhD students lacked that refined intuition and often gave higher grades if you could converse confidently about fundamental concepts and show a solid grasp of the terminology. Consciously or not, they tended to assume the best. The real test wasn't the material itself, but skilfully steering the exam into a dialogue where you could shine.

So why do I find myself reflecting on this old trick now? I'm reminded of it whenever I think about the grueling rounds of technical interviews at big tech companies. If you've ever gone through the process, you know how much it feels like a series of university exams, only now the subjects are coding challenges and algorithms instead of advanced physics. As an interviewer, I often fell into the same trap as the PhD students: I gave positive feedback to candidates who communicated well and could solve problems with the guidance I was too eager to provide. The dynamic felt more like a two-way conversation than a true assessment of problem-solving skills.

Could this be why big tech firms rely on multiple interview rounds? Perhaps it's an attempt to compensate for the lack of that "professor" in the room: someone who can effortlessly see through a candidate's responses and deliver an objective judgment based on decades of experience. I don't think it's necessarily a bad thing. If a prospective colleague can solve interview problems with some guidance, in a stressful setting, while maintaining a casual conversation, then I am confident they can do the same at work, and it suggests they can work well in a team.

Date: 2024-10-20 01:25 am (UTC)
From: [personal profile] dennisgorelik
> Now you can ask it to write short code parts and then just plug them in after minor editing

Are you implying that ChatGPT somehow cured "Can't Code Disease" (CCD)?

ChatGPT helps with coding, but it does not cure CCD: in order to code you need to be able to read code, and ChatGPT does not help much with the ability to read code.

Date: 2024-10-20 04:16 pm (UTC)
From: [personal profile] dennisgorelik
> I literally stopped writing code 2 years ago (I am using local models I fine-tune for me, not ChatGPT)

Do you mean that you do not even have to correct the code that your local AI models produce for you?
Does that auto-generated code work as is?

In my experience, AI-generated code is good as an initial prototype, but not as a final product.

Date: 2024-10-21 12:31 am (UTC)
From: [personal profile] dennisgorelik
> editing and writing are two different skills

If an engineer edits code but does not write new code, do you call that "Can't Code Disease"?

Date: 2024-10-21 07:29 am (UTC)
From: [personal profile] dennisgorelik
> That is what interviewer will see

What is "that" the the interviewer will see?

> if they give me pen and paper or a whiteboard during an interview

Why wouldn't you have access to your code generator during the interview?
The interview environment should be similar to your work environment.
