Meanwhile, conventional languages have caught up -- I don't think that John Hughes' arguments still hold. Let me briefly explain why I think so.
Lazy evaluation is a fascinating feature. It delays a computation until a result is really needed. Haskell, for instance, is based on this evaluation paradigm. In Lisp or Scheme, lazy evaluation can be simulated with so-called streams.
With lazy evaluation, code can in fact be highly modularized. One example in Hughes' paper is the Newton-Raphson algorithm for finding square roots. I rewrote the code in Scheme (using DrScheme), which looks like this:
(require (lib "stream.ss" "srfi" "40"))
(define (next_c N)
  (lambda (a)
    (/ (+ a (/ N a)) 2.0)))
(define (repeat f init)
  (stream-cons init (repeat f (f init))))
(define (within eps stream)
  (let ((a (stream-car stream))
        (b (stream-car (stream-cdr stream))))
    (if (<= (abs (- a b)) eps)
        b
        (within eps (stream-cdr stream)))))
(define (my-sqrt guess eps N)
  (within eps (repeat (next_c N) guess)))
The point is that the algorithm for approximating the square root ("next_c") is cleanly separated from "within". Even better, "within" is not specifically programmed for "next_c"; it can be reused generically. That's what Hughes means when he talks about modularization: the concerns (approximating the square root, and checking whether the difference between two subsequent values of a stream lies within a threshold) are placed in independent functions and used in combination by a combining function, "my-sqrt".
If you take a language like Python, which offers generators as a language feature, you can easily modularize the code in the very same way. With generators, streams can be easily emulated. Here's my hack:

def repeat(f, init):
    # a generator is lazy: each value is produced only on demand
    while True:
        yield init
        init = f(init)

def within(eps, g):
    a, b = next(g), next(g)
    while abs(a - b) > eps:
        a, b = b, next(g)
    return b
The code is practically identical in structure and lines of code. Generators enable a programmer to modularize code in the very same way as streams and/or lazy evaluation do.
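Putting the pieces together, here is a complete, self-contained sketch of the square-root approximation in Python (the function names mirror the Scheme version; the use of a lambda for the Newton-Raphson step is my own choice):

```python
def repeat(f, init):
    """Emulate an infinite lazy stream: init, f(init), f(f(init)), ..."""
    while True:
        yield init
        init = f(init)

def next_c(N):
    """Return one Newton-Raphson step for approximating sqrt(N)."""
    return lambda a: (a + N / a) / 2.0

def within(eps, g):
    """Consume the stream until two consecutive values differ by at most eps."""
    a, b = next(g), next(g)
    while abs(a - b) > eps:
        a, b = b, next(g)
    return b

def my_sqrt(guess, eps, N):
    return within(eps, repeat(next_c(N), guess))

print(my_sqrt(1.0, 1e-6, 2.0))  # approximately 1.4142135
```

As in the Scheme version, "within" knows nothing about square roots; it works with any generator that converges.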
Higher-Order Functions (HOFs) are functions that take one or more functions as arguments and/or return a function as a result. In pure functional programming, HOFs are a must; without them, function composition would be impossible.
Other languages are not built around the notion of a function. Their basic units of composition are, for instance, objects or components. In object-orientation or component-orientation you have other means of modularization, so there is limited need to simulate HOFs. Despite this, most "conventional" languages provide some sort of support for HOFs. In Python, for example, there are "map" and "reduce" functions. It's even possible to return functions as a result; Python supports lambda expressions.
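To illustrate, here is a small sketch of HOFs in Python: "map" and "reduce" digest functions, and a "compose" helper (my own example, not from Hughes' paper) returns a new function. Note that in Python 3, "reduce" lives in the "functools" module:

```python
from functools import reduce

def compose(f, g):
    """Return a new function that applies g first, then f."""
    return lambda x: f(g(x))

double = lambda x: 2 * x
increment = lambda x: x + 1

# map and reduce take functions as arguments ...
squares = list(map(lambda x: x * x, [1, 2, 3, 4]))   # [1, 4, 9, 16]
total = reduce(lambda acc, x: acc + x, squares, 0)   # 30

# ... and compose returns a function as its result
double_then_inc = compose(increment, double)
print(double_then_inc(5))  # 11
```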
Before someone misunderstands me: I would still agree that functional programming matters -- but in a different way. It is just not a matter of "modularization glue" (in the sense of John Hughes) any more. The gap between functional and imperative languages is closing!