Don’t Distract New Programmers with OOP

Thomas Gumz sent me a link to a blog entry entitled “Don’t Distract New Programmers with OOP”. Having just wrapped up a year of teaching “Intro to Programming and Problem Solving” to students at Clark College, I could not agree more. One of the core outcomes of my class centers on functional decomposition: how to break a problem down into smaller, simpler parts.
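As a concrete illustration of what that looks like in Python (the exam-score example and function names here are my own, not taken from the course materials):

```python
# A minimal sketch of functional decomposition: one problem
# ("summarize exam scores") broken into small, single-purpose functions.
# The example and names are illustrative, not taken from the course.

def load_scores(text):
    """Parse one score per line into a list of ints."""
    return [int(line) for line in text.splitlines() if line.strip()]

def average(scores):
    return sum(scores) / len(scores)

def letter_grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    return "F"

def summarize(text):
    scores = load_scores(text)
    return {
        "count": len(scores),
        "average": average(scores),
        "grades": [letter_grade(s) for s in scores],
    }

print(summarize("91\n84\n72\n65\n"))
```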

When I get asked “What’s a good first programming language to teach my [son / daughter / other-person-with-no-programming-experience]?” my answer has been the same for the last 5+ years: Python.

I get this question almost daily. Admittedly, before I started teaching the class I questioned the use of Python for new programmers. Well, guess what? It’s the perfect language for the job, and I have the results to prove it.

Did we cover object-oriented programming in the class? Yes, but not to the level most would expect; we did just enough for students to wrap their heads around the concept. One student, who had been pushing for more OOP content, tried to use OOP for their final project and had a heck of a time. After the class concluded, they admitted that OOP was much harder than they expected it to be.

The shift from procedural to OO brings with it a shift from thinking about problems and solutions to thinking about architecture. That’s easy to see just by comparing a procedural Python program with an object-oriented one: the latter is almost always longer, full of extra interface, indentation, and annotations. The temptation is to move trivial bits of code into classes, add all these little methods, and anticipate methods that aren’t needed yet but might be someday.
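To make that comparison concrete, here is a small sketch of my own (not from the linked post) showing the same trivial task written procedurally and then as a class; the class-based version adds interface and indentation without adding any new behavior:

```python
# The same trivial task written two ways. Toy example of my own,
# not taken from the linked post.

# Procedural: the whole solution is one function.
def total_price(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

print(total_price([9.99, 4.50], 0.08))

# Object-oriented: the same logic wrapped in a class, with extra
# interface (constructor, methods, state) the problem never needed.
class Order:
    def __init__(self, tax_rate):
        self.tax_rate = tax_rate
        self.items = []

    def add_item(self, price):
        self.items.append(price)

    def total(self):
        return sum(self.items) * (1 + self.tax_rate)

order = Order(0.08)
order.add_item(9.99)
order.add_item(4.50)
print(order.total())
```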

Be sure to read the blog entry; I think you will agree with avoiding OOP in an introductory programming class. If you are interested in learning more about pursuing a programming career, drop me an email; I would love to help.

You can read what others are saying about this article on Hacker News.

Comments

  1. That was indeed fascinating… particularly the “sequel”. What I consider missing from that perspective is a clear guideline for where OOP does make sense. In my opinion, there are two key criteria:

    1. The code is referring to a “real world” object.
    2. State serialization is important.

    If either is true, I find OOP actually simplifies both variable management and algorithm design and comprehension, rather than making either more complex.

    Having now been rather immersed for several years in the MVC mindset, I’ve gradually developed a theory, of sorts, to explain why the leap from procedural to OOP is typically difficult (and why I agree with at least the title of the post, if not necessarily every nuance of the argument): the core of all software is its behavior. Data is evidence that something exists or occurred, and almost all modern software needs some sort of user interface. But its behavior (“controller”, in MVC-speak) is arguably what really matters. Without that business logic, an app might as well just be a spreadsheet or “Word doc”, a simple UI to let the user edit the data directly and be done with it.

    It seems (both from my own experience and my observation of others) that, for new programmers, behavior just seems more instinctive to define procedurally, so early forays into OOP do tend to result in too many classes for classes’ sake. It’s only when code is actually describing a real object that OOP actually starts to make sense — not just in the “it’s a good idea” sense of that phrase, but also in the “hey, I get it now” sense. As a result, I feel that a good entry point for OOP is in the data (“model”) and UI (“view”) layers, leaving the behavior procedural for a while. Consider the notion of a row in a table: whether that refers to a row in a database table or a row in an HTML table, if what it represents is everything we “know” about an employee, it makes sense that there might be an Employee object that encapsulates the current state of the subset of characteristics about that real, flesh-and-blood, person that we are storing or displaying. But if a button alters some of those characteristics, just leave the code that executes that alteration procedural…

    …until we have a mature enough understanding — both of the art of programming and of the nature of the needs our software is intended to meet — to identify which behavior in our code maps to behavior of the real thing our object code would represent. If there’s something an employee would know how to do, and behavior in our code is analogous to the person doing it (whether we’re doing it for them or simply logging that they did), perhaps it makes sense for that code to move to a method of the Employee object. Now, instead of our procedure doing something to, or for, an employee, that procedure “tells” the Employee to do it. This is also where class hierarchies start to make sense: some things only a supervisor knows how — or is permitted — to do, but a supervisor can still do what an employee can do… because all supervisors are still employees. So are VPs and executives. If each layer defines new characteristics, but also new behaviors, you’re writing far less code than you would if you kept all of this logic procedural, because the very nature of each object includes its own “intelligence”, and retains the knowledge it already had each time it is “promoted”.
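    A rough Python sketch of this progression, for the curious: the Employee and Supervisor names follow the comment above, but the specific fields and methods are invented for illustration.

```python
# Rough sketch of the progression described above; Employee and
# Supervisor follow the comment, the fields and methods are invented.

from dataclasses import dataclass

# Early on: the object only encapsulates state (the "model");
# the behavior stays procedural.
@dataclass
class EmployeeRecord:
    name: str
    salary: float

def give_raise(record, amount):
    """Procedural behavior operating on the data object."""
    record.salary += amount

record = EmployeeRecord("Sam", 60000)
give_raise(record, 2500)

# Later: behavior the real-world employee "knows how to do" moves onto
# the object, and the hierarchy captures "a supervisor is still an employee".
class Employee:
    def __init__(self, name, salary):
        self.name = name
        self.salary = salary

    def give_raise(self, amount):
        self.salary += amount

class Supervisor(Employee):
    def approve_timesheet(self, employee):
        print(f"{self.name} approved {employee.name}'s timesheet")

boss = Supervisor("Ada", 90000)
boss.give_raise(5000)                          # inherited from Employee
boss.approve_timesheet(Employee("Sam", 60000)) # supervisor-only behavior
```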

    Gradually this premise can be extended to the inanimate, and eventually even the abstract, but I concur that this notion is difficult to wrap one’s head around in the early days of learning basic algorithm design. However, if a dozen or more years go by and we’re still structuring (and conceptualizing) all of our code procedurally, it’s almost guaranteed that we’re working harder than we have to… and producing results that are less performant, less scalable, less manageable, and less usable.

  2. Dan Sickles says:

    Reminds me of the Carmack quote:

    “Sometimes, the elegant implementation is just a function. Not a method. Not a class. Not a framework. Just a function.” – John Carmack

    Tim is right: real-world object simulation was the intended domain of the first OO language, which was named, oddly enough, Simula.

    For state serialization, Functional Programming is in resurgence precisely because complex state serialization is so difficult, even within an OO paradigm. It is becoming common to combine Functional and OO; popular JavaScript frameworks, Clojure, Scala, and Java 8 are good examples.

    But I would also start with Python, which happens to be the primary language I use in my day job.

    After teaching general Python familiarity through imperative solutions to many common problems, I would hammer on these points (a short sketch follows the lists below):

    - Functions: pure vs. side effects
    - Functions are values that can be passed to and returned by other functions
    - Immutability: why a tuple is not just an immutable list, and when to use one
    - List comprehensions / generators
    - And only then, introduce classes

    If frameworks are in scope:
    - a game framework (pygame or similar)
    - a simple web framework (more like CherryPy than Django)
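    A compact sketch of those points in Python (the examples are my own, not a proposed curriculum):

```python
# Compact illustrations of the points above; the examples are my own.

# Pure function vs. one with a side effect.
def add(a, b):                  # pure: result depends only on the inputs
    return a + b

log = []
def add_and_log(a, b):          # impure: mutates external state
    log.append((a, b))
    return a + b

# Functions are values: they can be passed to and returned from functions.
def twice(f):
    return lambda x: f(f(x))

increment = lambda x: x + 1
print(twice(increment)(3))      # 5

# Immutability: a tuple can serve as a fixed record or a dict key; a list cannot.
point = (2, 5)
distances = {point: 5.4}        # {[2, 5]: 5.4} would raise TypeError

# List comprehension vs. generator: same expression, eager vs. lazy.
squares = [n * n for n in range(5)]       # built immediately
lazy_squares = (n * n for n in range(5))  # produced on demand
print(squares, list(lazy_squares))

# And only then, a class: bundling state the code keeps passing around.
class Counter:
    def __init__(self):
        self.count = 0

    def tick(self):
        self.count += 1

c = Counter()
c.tick()
print(c.count)                  # 1
```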
