[personal profile] izard
Back in university, my friends and I had a trick for exams in advanced physics and math courses: we’d try to sit them with a postdoc or, better yet, a PhD student rather than a seasoned professor. Professors, with decades of experience, could gauge a student’s depth of understanding after just a few questions and assign grades accurately. PhD students lacked that refined intuition and often gave higher grades if you could converse confidently about fundamental concepts and show a good grasp of the terminology. Consciously or not, they tended to assume the best. The real test wasn’t the material itself, but skilfully steering the exam into a dialogue where you could shine.

So, why do I find myself reflecting on this old trick now? I’m reminded of this experience whenever I think about the grueling rounds of technical interviews at big tech companies. If you’ve ever gone through this process, you know how much it feels like a series of university exams—only now, the subjects are coding challenges and algorithms instead of advanced physics. As an interviewer, I often fell into the same trap as the PhD students—giving positive feedback to candidates who had good communication skills and could solve problems with the guidance I was often too eager to provide. The dynamic often felt more like a two-way conversation than a true assessment of problem-solving skills.

Could this be why big tech firms rely on multiple interview rounds? Perhaps it’s an attempt to compensate for the lack of that "professor" in the room: someone who can effortlessly see through a candidate's responses and deliver an objective judgment grounded in decades of experience. I don't think it’s necessarily a bad thing. If a prospective colleague can solve interview problems with some guidance in a stressful interview setting, while maintaining a casual conversation, then I am confident they can do the same in a work setting and will work well in a team.

Date: 2024-10-20 04:16 pm (UTC)
From: [personal profile] dennisgorelik
> I literally stopped writing code 2 years ago (I am using local models I fine-tune for me, not ChatGPT)

Do you mean that you do not even have to correct the code that your local AI models produce for you?
Does that auto-generated code work as is?

From my experience, AI-generated code is good as an initial prototype, but not as a final product.

Date: 2024-10-21 12:31 am (UTC)
From: [personal profile] dennisgorelik
> editing and writing are two different skills

If an engineer is editing code but does not write new code, do you call that "can't code disease"?

Date: 2024-10-21 07:29 am (UTC)
From: [personal profile] dennisgorelik
> That is what interviewer will see

What is "that" the the interviewer will see?

> if they give me pen and paper or a whiteboard during an interview

Why wouldn't you have access to your code generator during the interview?
The interview environment should be similar to your work environment.
