Hacker News
Pissed off about functional programming (perlmonks.org)
97 points by gnosis on March 26, 2011 | 37 comments


This seems confused.

"Lazy evaluation brings time into the picture"

Lazy evaluation changes the programming language semantics; but time, per se, is not "brought into the picture" in any visible way - if the specific time of evaluation were visible, the program wouldn't be referentially transparent.

"In point of fact, the simultaneity constraints inherent in lambda calculus have been shown to make it unsuitable for issues involving concurrency (like multithreading)"

Functional programming is particularly suited to parallelization; concurrency demands a different approach, though which one depends on the application domain.


You might want to put "(2005)" in the title.

I'll pick a nit. He writes, "According to Church's thesis, all programming languages are computationally equivalent. They all do the same things, and only the syntactic sugar is different. In this case, the syntactic sugar is so different that people can end up using variable assignment without knowing they're doing it ....."

I think this is incorrect. Church's Thesis (and correct me if I'm wrong) says that all programming languages are "functionally" equivalent; as in, if you look at a program in a language as a black box, you can replace the contents of the box by another program in another language, and the results would be identical. Now, whether one language uses variable assignment and other doesn't, is irrelevant; and it doesn't flow from the claim.


Corrected in a followup comment:

http://perlmonks.org/?node_id=451236


I get the feeling the author's total knowledge of functional programming comes from reading -1 comments on Slashdot.

  > It's theoretically possible that the 'f()' or 'x' might
  > change between the lazy evaluation of step one and the
  > lazy evaluation of step three million. Trying to prevent
  > that is what we programmers call a 'hard' problem.
I don't know of any functional language that allows values to change during evaluation.

  > Trying to run functional logic in two or more
  > simultaneous threads can raise serious problems, for
  > instance.
The whole point of using functional programming in multithreaded applications is that it doesn't raise such problems. The data are immutable; one thread doesn't affect the other, unless the programmer explicitly uses stateful constructs like pointers.
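A minimal, base-only sketch of that point (the function and variable names here are my own, not from the thread): two forkIO threads walk the same immutable list, and the only shared mutable state is the pair of MVars the programmer introduces explicitly.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- Two threads consuming the same immutable list: neither can change
-- what the other sees. The MVars are the only shared mutable cells,
-- and they exist only because we explicitly asked for them.
sumAndLen :: [Int] -> IO (Int, Int)
sumAndLen xs = do
  a <- newEmptyMVar
  b <- newEmptyMVar
  _ <- forkIO (putMVar a (sum xs))     -- thread 1 reads xs
  _ <- forkIO (putMVar b (length xs))  -- thread 2 reads xs too
  (,) <$> takeMVar a <*> takeMVar b
```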

  > Myth #2 - Functional programming is 'different from' imperative programming.
  > [...]
  > On the other hand, 'imperative' is also used as a
  > contrast to 'functional', on the basis of really poorly
  > defined terms. The most common version runs something
  > like, "imperative languages use assignment, functional
  > languages don't".. a concept I'll jump up and down
  > upon later.
I've never heard this definition. Ever. It simply doesn't make any sense; of course functional languages use assignment.

  > No, functional programming is not some kind of magic
  > pixie dust you can sprinkle on a computer and banish
  > all problems associated with the business of
  > programming. It's a mindset-altering philosophy and/or
  > worldview, not a completely novel theory of
  > computation. Accept that. Be happy about it. At least
  > friggin' COPE.
This paragraph is completely ridiculous. Functional programming:

1. Is not magic problem-solving pixie dust (obviously).

2. Is not a philosophy, mindset-altering or otherwise.

3. Is a novel theory of computation.

Whether a Turing machine and the lambda calculus are equivalent in power is irrelevant; they are, undeniably, different. Otherwise, there would be no point in using one over the other.

  > Myth #3 - Functional programming is referentially transparent
This entire section deals with Perl, which is not a functional language (as it is not based on the lambda calculus). I don't know enough about Perl to state whether the author's assertions are correct, but regardless, they have nothing to do with functional programming.

The second half of section #3 claims referential transparency is impossible because it requires two values with some external equivalent meaning, such as "three" and "3", to be considered equivalent by the program. At this point I'm wondering whether the entire article is some elaborate prank.

  > Myth #4 - Functional programming doesn't support variable assignment
  > [...]
  > According to Church's thesis, all programming languages
  > are computationally equivalent. They all do the same
  > things, and only the syntactic sugar is different.
Section #4 is already off to a bad start...in fact, reading further, it's just more of the same. Abject confusion, combined with a belief that Perl can be used to demonstrate functional concepts, yielding incomprehensible conclusions.


I'm decently sure "functional programming has been called programming without assignments" comes from Ravi Sethi's programming languages book.


In that case, I'd be very interested to know what Dr. Sethi meant by it. Even the simplest Haskell program requires one assignment (to `main`), so the idea of programming without them seems like an exercise in futility.


The "=" in the declaration of Haskell functions isn't considered an assignment, but a binding of a name to a definition. Usually when you have assignments, you are permitting a name to be rebound to a different value than it had originally. In Haskell you cannot rebind a name. Essentially "=" in Haskell is more like the operator's use in defining mathematical functions.

Due to how scoping and do-notation actually work, you can do some things that look like assignment, but they really create a new binding in a nested scope. But if you try to bind a name twice in the same scope, the program fails to compile.
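A small sketch of that distinction (example mine): the inner let creates a fresh binding that shadows the outer one without mutating it, while writing two definitions of x in the same let block is rejected at compile time.

```haskell
-- Shadowing: a new binding in a nested scope, not a mutation.
shadowed :: (Int, Int)
shadowed =
  let x = 1
      inner = let x = 2 in x   -- fresh binding; the outer x is untouched
  in (x, inner)                -- evaluates to (1, 2)

-- By contrast,  let x = 1
--                   x = 2  in ...
-- fails to compile with a conflicting-definitions error.
```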


I intuitively knew this was the case, but wanted to do some research before posting an answer. The distinction between naming and assignment can be a little subtle. And you beat me to the punch.

But I thought this page was interesting:

http://www.haskell.org/haskellwiki/Pronunciation

'=' is pronounced as 'is'.

So you'd say something like 'main is the function that evaluates the expression...' Traditional assignment would be something more like 'the value stored in the container main now becomes the expression...'



What definition does "assignment" have other than "a binding of a name to a definition"?

Keep in mind that there are at least three popular definitions of the = operator:

1. In C or C++, = handles assignment and mutation (no re-assignment):

  int x = 1; /* assignment */
  x = 2; /* mutation */
2. In Python, = handles assignment and re-assignment (no mutation):

  x = [1] # assignment
  x = [2] # re-assignment
3. In Haskell, = handles assignment (no re-assignment or mutation)

  let x = 1 in x
Just because Haskell's = operator does not behave like C's = operator is no reason to claim Haskell does not support assignment. In fact, that is the only thing Haskell's = operator supports.


This is now a semantic discussion - which is not necessarily a bad thing, we should just know that up front.

To me, the statement "programming without assignment" implies programming without making multiple assignments to the same name - mutation or re-assignment. I don't know if the programming language community has agreed-upon semantics for the word "assignment," but to me, it implies re-assignment and mutation. What you call "assignment" I think of as binding - that is, the lvalue gets bound to the rvalue.


The technical terms are 'variable assignment' and 'single assignment'. 'Assignment' is generally considered to mean the former unless it's qualified.

From a pedantic perspective, you can't really disagree with the statement 'Haskell supports single assignment.' But to be equally pedantic, most actual text reads 'FP doesn't support variable assignment', including the original article above.

I don't have a copy of the Ravi Sethi book to see what the exact text of his exact statement was.

Now that I've seen this image on wikipedia:

https://secure.wikimedia.org/wikipedia/en/wiki/File:Fp_no_de...

I'm really tempted to waste some more money at Cafe Press.


To me, the argument that convinces one that Haskell's = "assignment" is not the same beast as C's and Python's is the following: the binding it defines is also valid in code before it.


I think I didn't get this :( Care to explain?


    main=print n
    n=42
In C or Python, this would print 0 or undef or some random contents. In Haskell, it prints 42. The "assignment" reaches back to instructions before itself. Which ought to hint it's not really an assignment in the (my) usual sense of the word.


In Haskell, I can:

    x = 0:y
    y = 1:x
    main = print $ take 10 y
Producing output:

    [1,0,1,0,1,0,1,0,1,0]


I think the other replies have answered your question well enough, but your definitions for = are a little off. Cases 1 and 2 are really the same thing. In Python the names are references to objects. Assignment in Python is like using pointers in C. You are doing mutation; what you mutate is the value of a pointer (and probably some other odds and ends for garbage collection).


I was going to say the same thing - that cases 1 and 2 are really the same thing - when I realized that's not quite accurate. In his terms, mutation is a special case of re-assignment. That is, re-assignment allows re-binding a name to an entirely different value which may have a different type. Mutation is then re-assignment where the types happen to be the same.


Haskell contains no assignment in the traditional sense. = in Haskell defines a name to stand for a value, which can't ever change.


He means "destructive assignment".


I'd like to propose a new rule similar to Godwin's--- any response/reply that begins with a personal attack should be ignored from that point forward--- except I'm not sure how I avoid being self-referential...


To a certain extent sound thinking people already follow this rule, though I don't think it's yet been named as such.


I was sure that things like 'let' in ML are not assignments - but I looked it up in wikipedia: http://en.wikipedia.org/wiki/Assignment_(computer_science)#S... Apparently it is assignment - but a special case of it.


Usually that is considered a let-binding and not assignment. But I guess either is ok


Comments link to a good LtU discussion that counters his claims and confusions: http://lambda-the-ultimate.org/node/678


   > On one hand, 'imperative' means 'a list of operations
   > that produce a sequence of results'. We contrast that 
   > with 'declarative programming', which means 'a list of 
   > results which are calculated (somehow) each step of the 
   > way'. Mathematical proofs and SQL queries are 
   > declarative. Most of the things we think of as 
   > programming languages are 'imperative' by this meaning, 
   > though, including most functional languages. No matter 
   > how you slice it, Haskell is not declarative.
Either I'm confused or the author is.

SQL is declarative, sure, but I don't think mathematical proofs are. Sometimes you do declarativeish things like, "Let x be the solution, known to exist, to this unsolvable problem." But I think the structure is fundamentally very imperative; ignoring the syntactic sugar, formal mathematical logic is a sequence of logical transformations that leads from hypothesis to conclusion.

I can chalk that up to confusion or inexperience on the author's part, but the bit about Haskell really confuses me.

I'm still quite new to functional programming -- just emerging from the "how the heck do I do that without for loops?" phase -- and the experience really has been one of learning to program all over again. As in the CS 101 days of yore, I have to spend a lot of time trying to figure out how to accomplish simple things, dealing with alien logical consequences and constructs.

Now, Scala lets me do things either way, and I like to think I'm getting to the point where that's an advantage rather than a crutch. But looking back over the last little while, I can't imagine how you could possibly slice Haskell in such a way that it's declarative at all. I mean, I've been banging my head against the relentless declarativity of it for a while. I'd like to think if there were crutches, I'd have found them.

Unless the author is talking about something silly, like the fact that it's ultimately compiled down to an imperative language to run on a real processor . . . I can't figure out what he's talking about. And I don't know whether to chalk that one up to confusion on his part or inexperience on mine.


The author is very confused.

"Declarative" and "imperative" are just syntactic styles. Haskell supports both. For example, the following Haskell expressions are equivalent:

  dec :: [Integer]
  dec = map (* 2) [1..10]

  imp :: [Integer]
  imp = do
          x <- [1..10]
          return (x * 2)
Due to their origins, stateful languages are traditionally imperative, and functional languages are traditionally declarative. But please don't confuse the two sets of terms.

                Stateful
                   |
                   |
                   |
                   |
  Imperative       |       Declarative
  -----------------+------------------
                   |
                   |
                   |
                   |
                   |
               Functional


Thanks for that.

I did know about the do syntax in Haskell, but I didn't think it really made the language usefully imperative. Sure, it's okay if you're trying to kick off two or three unrelated calculations, maybe if you're after side effects. But I didn't find it much to fall back on when trying to do something complicated and having a hard time thinking about how to break down the problem without for loops and case statements and value accumulators. Maybe it's more robust than I know, but it seems like such a small piece of such a big language. Calling Haskell imperative on account of that seems like calling perl OO because you can kinda sorta do basic OO stuff most of the time.

I am also having trouble understanding the claim that the ideas are orthogonal. I can see plainly enough that they aren't literally the same thing: an imperative language breaks a problem down into simpler steps, while a declarative one breaks a problem down into simpler definitions. And a stateful language accumulates progress in changing values, while a functional language does so by resolving expressions.

So far so good, but I can't really decouple the concepts. How would you break a problem down by time, but track results through logical resolution? Isn't the point of a step or instruction to leave something about the state of the program changed--I mean, isn't it inherently stateful?

It seems to me the advantage of imperative programming is that the steps are logically decoupled, just as the advantage of declarative programming is that the pieces are temporally decoupled. So how in the world would you make a declarative program accumulate progress via state?


  > But I didn't find it much to fall back on when trying to
  > do something complicated and having a hard time thinking
  > about how to break down the problem without for loops
  > and case statements and value accumulators.
When I'm porting some stateful code to Haskell, I usually start by transcribing it as closely as possible. That means using for-loops, using stateful accumulators, and usually putting the whole thing in IO. Once you've got a big component compiled and working, you can start factoring out chunks into smaller, pure functions. It's the same as working a big tangle out of some string: keep working and the mess will just sort of evaporate.
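That porting path can be sketched in a few lines (names mine): a near-literal IORef transcription of a C-style accumulator loop, next to the pure fold it eventually factors into.

```haskell
import Data.IORef
import Control.Monad (forM_)

-- Step 1: transcribe the stateful loop as closely as possible, in IO.
sumLoop :: [Int] -> IO Int
sumLoop xs = do
  acc <- newIORef 0
  forM_ xs $ \x -> modifyIORef' acc (+ x)  -- strict modify avoids thunk buildup
  readIORef acc

-- Step 2: once it works, factor out the pure function hiding inside it.
sumPure :: [Int] -> Int
sumPure = foldl (+) 0
```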

Of course, the difficulty here varies wildly depending on how good the original code is. Most stateful code is pretty ugly (lots of mutation, lots of tangled side-effects), but a few developers are competent enough to write good code in any language.

  > Maybe it's more robust than I know, but it seems like
  > such a small piece of such a big language. Calling
  > Haskell imperative on account of that seems like
  > calling perl OO because you can kinda sorta do basic
  > OO stuff most of the time.
I think you're looking at the word "imperative", and assuming a lot of properties that simply aren't included. A language is not imperative or declarative; a language can support writing in imperative or declarative style, and some languages support certain styles better than others, but it's not like one is much better than the other.

  > And a stateful language accumulates progress in
  > changing values, while a functional language does so
  > by resolving expressions.
Every program progresses by changing values. The difference is scope; a stateful language progresses by applying a sequence of mutations to a single state. Every procedure is a sequence of mutations to that state. Haskell programmers (somewhat snarkily) call that state the World.

In functional programs, state is still accumulated, but in fragments. Each function has access only to what state it needs, and its contribution to the program's progress is restricted to its returned value. Only IO values have access to the World.

That last sentence means that Haskell is also a stateful language. It is, effectively, a superset of C.
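One way to picture the "World" idea is a toy model in which an action is just a function from a world to a result and a new world. This is a conceptual sketch with made-up names, not how GHC actually implements IO.

```haskell
-- Hypothetical stand-in for "everything outside the program";
-- here it only tracks the lines printed so far.
newtype World = World [String]

-- An action takes the world and returns a result plus a new world.
type Action a = World -> (a, World)

printLine :: String -> Action ()
printLine s (World out) = ((), World (out ++ [s]))

-- Running a program threads one World through from start to finish.
runProgram :: Action a -> (a, [String])
runProgram act = let (r, World out) = act (World []) in (r, out)
```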

  > So far so good, but I can't really decouple the
  > concepts. How would you break a problem down by time,
  > but track results through logical resolution? Isn't the
  > point of a step or instruction to leave something about
  > the state of the program changed--I mean, isn't it
  > inherently stateful?
The point of imperative programming isn't necessarily state, but dependency. Step 10 depends on the results of step 9, which depends on the results of step 8, and so on. It might be that no step actually requires access to the whole state -- only one little corner of it.

In well-designed programs written in stateful language, a great deal of effort is spent on how to separate state from dependency. In Haskell, the compiler handles it.

  > So how in the world would you make a declarative
  > program accumulate progress via state?
With parameters and the call tree, primarily.
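For instance (example mine), the "state" of a running count can live in an accumulator parameter threaded down the call tree instead of in a mutable cell:

```haskell
-- The accumulator parameter plays the role a mutable counter would
-- play in a stateful language. Assumes a non-negative input.
countDigits :: Int -> Int
countDigits = go 0
  where
    go acc n
      | n < 10    = acc + 1
      | otherwise = go (acc + 1) (n `div` 10)
```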


That was . . . really enlightening.

Thank you for taking the time to educate me. I've found the Haskell community uncannily welcoming to newbies, and you seem to be continuing the tradition.


To complete the exercise (in no particular language),

Imperative and stateful:

   var factorialResult = 1;

   factorial(n) {

     if(n <= 1) {
       factorialResult = 1
     } else {
       factorial(n-1)
     }

     factorialResult = factorialResult * n
   }

Imperative and functional:

   factorial(n) = {

     var lastResult = 1

     if(n <= 1)
       lastResult = 1
     else
       lastResult =  factorial(n-1)

     return n * lastResult

   }

Declarative and stateful:

   var factorialResult = 1

   factorial(n) {
     if(n > 1)
       pointlessFunction(multiplyBy(n), factorial(n-1)) 
   }

   multiplyBy(n) {
     factorialResult *= n
   }

Declarative and functional:

   factorial(n) {

     if(n <= 1) 
        1

     else
        n * factorial(n-1)

   }
Everyone learns imperative and stateful. Some people also learn declarative and functional. Good programmers tend to do functional when working imperative. And declarative stateful is possible, but just plain silly.
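For comparison, the declarative-and-functional variant rendered in actual Haskell (the language most of this thread discusses) might read:

```haskell
-- The standard recursive definition: a declaration of what
-- factorial *is*, with no mutable state anywhere.
factorial :: Integer -> Integer
factorial n
  | n <= 1    = 1
  | otherwise = n * factorial (n - 1)
```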

No value judgement can be made between imperative and declarative; do what suits the problem. Sometimes you're following a recipe for cookies, and other times you're parsing XML.

Functional is better than stateful, but not in a competitive sense -- not like ipods are better than walkmans. More in a pragmatic sense -- more like laws are better than wars . . . when the law will solve your problem. Sometimes you really do need to talk to the world. For everything else, there's functional programming.


Can you explain the imperative code? I understand that it has to do with lists being monads and all, but I don't get why

    (return ([1..10]*2)) :: [Integer]
doesn't work. What's the magic behind

    x <- [1..10]

?


That `do` expression is equivalent to

    [1..10] >>= (\x -> return (x * 2))
and the definition of (>>=) for a List monad is

    instance Monad [] where
      return a = [a]
      xs >>= f = concat (map f xs)
`xs` becomes bound to [1..10]

`f` becomes bound to (\x -> return (x * 2))

So we map `f` over each element in the list and get mini-lists. `x` gets bound to each element in the list, x is multiplied by 2, and then returned into a list (1 becomes [2], 2 becomes [4], and so on). So we get:

    concat [[2],[4],[6]..[20]]
And then we concatenate the mini-lists together to get:

    [2,4..20]
As an aside, this is also equivalent

   [1..10] >>= return . (* 2)
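To tie the spellings together (a sketch; the names are mine), all three forms should evaluate to the same list:

```haskell
viaDo, viaBind, viaCompose :: [Integer]

-- do-notation, as in the original example
viaDo = do
  x <- [1 .. 10]
  return (x * 2)

-- its desugaring into (>>=) with an explicit lambda
viaBind = [1 .. 10] >>= \x -> return (x * 2)

-- the point-free aside from the comment above
viaCompose = [1 .. 10] >>= return . (* 2)
```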


Thanks!


As I understand, a mathematical proof is a declaration that certain transformations are possible. Really, it just points out truths that already exist.


The author of this article appears never to have used a functional programming language in his life. He misunderstands several key terms in the realm of functional programming and makes up a few of his own terms so that he can construct a strawman of what people claim functional programming is, which he can subsequently prove that it isn't. There are plenty of misconceptions about functional programming, but quite a bit more than combating them, he demonstrates how many he himself holds.

  > Myth #1 - Functional programming is Lambda Calculus
Of course functional programming is not just lambda calculus—there are actual data types besides functions—but I don't think this is what he's arguing. He's trying to argue that “simultaneity” (which I think he made up) isn't satisfied, and somehow conflicts with lazy evaluation. This is only a valid argument about lazy, impure languages, and I don't think any exist. All of the differences he's claiming exist between lambda calculus and functional programming don't exist between lambda calculus and Haskell.

  > In point of fact, the simultaneity constraints inherent in lambda calculus have been shown to make it unsuitable for issues involving concurrency (like multithreading), and for problems like that, it's better to use pi-calculus.
Whaaat?! Purely functional programming is much better suited to concurrency, because it completely satisfies “simultaneity constraints”, which makes race conditions, etc. impossible.

  > Myth #2 - Functional programming is 'different from' imperative programming.
  > [...]
  > Church's thesis, upon which we base our whole definition
  > of 'computing', explicitly states that Turing machines,
  > lambda calculus, while programs, Post systems, mu-recursive
  > grammars, blah-de-blah-de-blah, are all equivalent.

The entire point of the Church-Turing thesis is there can be completely different models of computation that can compute the same problems, not that they're all the same concepts underneath.

  > For those quick thinkers out there who've noticed that this puts Myth #2 in direct contradiction with Myth #1, KA-CHING! you've just won our grand prize!
I'm not even sure what he means here. The statements that functional programming is based on lambda calculus, that it is different in significant ways from imperative programming, and that all models of computation can solve the same problems are not at odds in any way.

  > Myth #3 - Functional programming is referentially transparent
This is where I get the impression that he has never actually used a functional programming language in his life, and he completely misunderstands what “referentially transparent” means. Haskell, barring its unsafe* functions, is indeed referentially transparent, which means that expressions will always have the same value no matter when they're evaluated. His use of Perl examples, using features of Perl that directly violate referential transparency, to demonstrate that functional languages aren't referentially transparent, is bizarre.
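A minimal illustration of what referential transparency buys (example mine): any expression can be replaced by its value, or by any equal expression, without changing the program.

```haskell
double :: Int -> Int
double x = x + x

-- These two definitions are interchangeable everywhere, because
-- `double 3` always denotes the same value no matter when or where
-- it is evaluated:
viaExpr, viaValue :: Int
viaExpr  = double 3 + double 3
viaValue = 6 + 6
```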

  > Myth #4 - Functional programming doesn't support variable assignment
I don't even know what he's trying to prove in this section. Yes, functional programming does allow you to iterate functions on values to produce new values. Haskell's standard library provides several utility functions for doing just this. The point of not supporting assignment is eliminating mutable state, which results in referential transparency, and not letting the effect of one part of your code leak out in hard-to-predict ways and affect other parts of your code, both of which it does successfully.

His example makes no sense at all. He shadows a variable and then pretends that it's the same as the parent variable so that he can prove that variables don't always stay the same, because `x == y` is true in one scope and false in another. This doesn't change anything about referential transparency or having assignment, because referential transparency doesn't say that if you copy and paste an expression into a different scope it will have the same value.

I wish that instead of getting angry at other people for misunderstanding functional programming, he would try a little harder to actually understand it himself.


The goals of functional programming are to abstract away as much detail of the calculation as possible, whereas the goals of imperative programming seem to be providing a lot of transparency as to how the problem will be calculated. If you get away from being in one camp or the other you'll actually see that the two camps are moving closer together. Personally, I think the best example of this is C# (an imperative language with functional extensions) and F# (a functional language with imperative extensions).

Sometimes I really want to specify that a calculation must take place via a certain method (let's say Newton's method and the performance improvements that are possible with certain hacks) and sometimes I don't care (such as partial application of a function).

The best situation is to put both tools in the hands of a programmer and let them decide what to do. Neither the functional nor imperative camps are 'right' but there are a lot of good ideas in both camps.



