12 Bonehead Misconceptions of Computer Science Professors

The poster child for what’s wrong with postsecondary education is the computer science program. Despite the enormous need for competent programmers, database administrators, systems administrators, IT specialists and a host of other technical professionals, computer science programs seem to explicitly ignore the professional skills that western society increasingly lacks, and instead proceed with material and teaching styles that are outdated, ineffective, useless and just plain wrong. Much of this stems from the absurd misconceptions held by computer science faculty members across many universities.

I have personally met computer science professors who believe each of the following things. I make no claims as to how widespread these beliefs are; you can judge that for yourself.

1. Java is a good first teaching language

I don’t know how many computer science programs start teaching programming using Java, but there are more than a few, and that’s too many. When you’re going over variables, loops and conditionals, the object-oriented overhead of a language like Java is unnecessary and confusing. Inquisitive students can’t just memorize incantations (e.g., public static void main(String[] args)) without demanding to know what they mean and why they’re there.
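To see the overhead in question, here is the canonical first Java program, with a comment on each piece of ceremony a beginner is told to type on faith before writing a single line of actual logic (a sketch; the class name is arbitrary):

```java
// HelloWorld.java -- the traditional first Java program.
// Everything except the println line is boilerplate a beginner
// must memorize before any of it can be properly explained.
public class HelloWorld {                    // what's a class? why "public"?
    public static void main(String[] args) { // static? void? what are args?
        System.out.println("Hello, world!"); // the one line that matters
    }
}
```

Compare that with a scripting language, where the equivalent program is the single print statement with no surrounding ceremony.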

2. Machine language is “basic”

Comp Sci people seem to be terribly confused about what ‘basic’ means. When one learns to drive a car, starting the car, making a right turn, a left turn, parking, etc. is basic. Building a parallel gas-electric hybrid engine is not basic. Driving a car is more basic than building one because the latter requires significantly more expert knowledge than the former. In the same way, using a simple scripting language requires less depth of understanding than writing in machine language; therefore, computer science education should start with higher-level languages and proceed to lower-level ones, not vice versa.

3. You should write code on paper before you write it on a computer

Writing code by hand is stupid. It is entirely inconsistent with the interactive and iterative design process that comes naturally to hackers and painters alike. Professional software developers make extensive use of API documentation, reference guides, forum discussions, etc. to troubleshoot problems and make their code more efficient and effective. Writing code by hand tests your ability to write trivially simple software without making errors. Real programmers must be capable of making complex software and detecting their errors with a variety of automated tools. Teaching or testing coding using pencil and paper is inconsistent with both the natural mode of human action and the practical realities of software development.

4. Lectures are an effective method of teaching programming

Programming is like algebra. You can’t learn how to write code by watching someone write code on a blackboard or listening to elaborate explanations from professors. You can’t learn math from watching someone do math. You learn to do things by doing them.

5. Algorithm design is learned by reading existing algorithms

Designing algorithms is about finding innovative solutions to difficult problems. Algorithm design courses are about studying existing solutions to rather simple problems. Learning how a particular problem can be solved provides approximately zero insight into how to solve problems you’ve never encountered before.

6. You can just ‘pick up’ Prolog in a week for a course

There’s this crazy belief among Comp Sci. faculty that all languages are basically the same, so after learning the principles behind languages you can use whatever. This is bullshit. This is like claiming that since someone studied Spanish grammar in grade school, they can speak Spanish fluently, in any of the Spanish, Mexican or Colombian accents. The leap between structured and object-oriented programming is huge, and it pales in comparison to the leap between object-oriented languages and declarative languages.

7. Exams measure understanding of programming

Teams of professional programmers spend months and years building intricate software systems in response to poorly understood, ill-defined and changing problems. To accomplish this, they employ API documentation, online tutorials and forum discussions, team problem-solving sessions, reference books and an infinite number of phone-a-friend lifelines. Exams test your ability to write simple code to solve trivial, well-defined, static problems without consulting any references. One is about resourcefulness, the other about memory. Exams test the wrong thing.

8. GUIs are not an important aspect of learning to code

At the university where I did my undergrad, it was easy to finish a B.Sc. in computer science without ever building a graphical interface. While I agree that many software projects do not have graphical components (e.g., developer APIs), to marginalize GUIs as some kind of specialty endeavor is flat-out crazy!

9. Programming Requires Calculus

I have been told that development involving sophisticated work with graphics and animation involves calculus. Outside of this particular subfield, however, I haven’t seen much calculus in software development. Certainly I’ve seen a lot more GUI development than graphics.

10. Linux will rapidly overtake Windows among consumers

Comp. Sci. profs have been saying this for years. Hasn’t happened. And it’s not going to happen until Ubuntu and company take the dicking around out of computing the way Apple has.

11. LaTeX will overtake WYSIWYG text editors because LaTeX gives you more control

Yes, believe it or not, a computer science prof said this during one of my classes in undergrad. It goes directly to a deeper misunderstanding among Comp. Sci. academics that power and control are the primary factors driving adoption. They’re not. Simplicity and ease of use are far more important.

12. You can buy gates at RadioShack

The same idiot who thought LaTeX was the future also told his class to go buy gates (the things transistors are made of) at RadioShack and play with them to see how they work. Again, this evidences how completely out of touch some of these people are. Gates are microscopic. You can’t go buy them at an electronics store.

Update (25MAR2011): As so many helpful readers have pointed out, 1) gates are made of transistors, not the other way around, and you can now buy gates at Radio Shack online. However, the prof in question told me to go buy gates at a physical Radio Shack store in 2001, and they had no such thing. I don’t know what I was thinking when I wrote “the things transistors are made of.”

Conclusion

I have long argued that society needs a professional certification for software developers and that universities need undergraduate programs dedicated to training people for these certifications. It’s worked for accounting, engineering and medicine. There’s no reason it can’t work for software development. One of the primary barriers to this sort of progress is the raging incompetence of academics in computer science, computer engineering, management information systems and related disciplines.

Have one or a few to add? Comment away.



