I love Forth. The reason is that you get results fast. The incremental nature (just try some functions and build a word) eliminates the long design phase of a “carefully planned” project. I don't use Forth exclusively; I have experience with a lot of other (very different) programming environments, from C++ to VHDL and Verilog.
Yes, with a careful design, you can reduce the debug portion of the overall design time, perhaps even down to 20% (I haven't seen that; 50% is still more typical). How? By expanding the unproductive time (meetings, head scratching about specs, etc.) that fills the other 50%, for a total of several times what the Forth programmer takes, even though he's spending 80% of his time debugging. And it's not just buggy code that takes debug time. The worst bugs are specification bugs, i.e., things that looked good when you wrote the spec, but look bad now. Spec bugs are by far the hardest to fix in a classical waterfall design approach. When I'm asked how many lines of code I write in a day (counted the classical way), the response is that, being a touch-typist, I should be able to type them in a few minutes. You can't reduce the amount of time you spend typing code. You can only reduce the other, non-productive aspects, with the result that debugging finally dominates everything. That's a good sign: debugging is the only feedback you get, and the feedback is the important part.
The Forth approach is typically to tinker with the problem first. Most of what's written during that phase gets thrown away later, but it increases the understanding of the problem by an amount unreachable by “design.” This principle is not exclusive to Forth programmers; it's (for example) part of the Extreme Programming (XP) philosophy. But the interactive nature of Forth enhances it.
There are some factual errors in Jack Ganssle's “I Hate Forth.” Forth is not just an interpreter that stores text (and comments) in the target. Forth is an incremental compiler. The generated code varies from simple and small address interpreters (threaded code), where the “inner interpreter” is little more than an indirect jump, to full-blown native code. Also, systems on small targets often keep the outer interpreter (the one that interprets ASCII commands) and the compiler on the host.
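To make the “address interpreter” point concrete, here is a minimal sketch of threaded code, written in Python only for readability (the technique is language-independent, and all names here are invented for illustration): a compiled word is just a list of addresses of primitives, and the inner interpreter is a loop that fetches the next address and jumps to it.

```python
# Minimal sketch of threaded code; all names are invented for illustration.
LIT = object()           # marker: the next cell is an inline literal

def DUP(s): s.append(s[-1])                     # duplicate top of stack
def MUL(s): b = s.pop(); s.append(s.pop() * b)
def ADD(s): b = s.pop(); s.append(s.pop() + b)

def run(code, stack=None):
    """Inner interpreter: fetch the next 'address' and jump to it."""
    s = [] if stack is None else stack
    ip = 0                                      # instruction pointer
    while ip < len(code):
        op = code[ip]; ip += 1
        if op is LIT:                           # inline literal: push next cell
            s.append(code[ip]); ip += 1
        else:
            op(s)                               # the "indirect jump"
    return s

# "Compiled" body of the colon definition  : SQUARE+1  DUP * 1 + ;
SQUARE_PLUS_1 = [DUP, MUL, LIT, 1, ADD]

print(run([LIT, 7] + SQUARE_PLUS_1))   # [50]
```

In a real threaded-code Forth the cells hold addresses of machine-code routines and the fetch-and-jump step (often called NEXT) is just a couple of machine instructions, which is why the compiled code is so small.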
Forth words (subroutines) also are rarely as long as 50 lines; that's considered bad form. Forth words should typically be one or two lines long (excluding comments and special indentation, e.g., control words on a line of their own). This follows Hoare's principle that programs are either so simple that they obviously contain no bugs, or so complex that they contain no obvious bugs. It's clear that you can only reach reliable operation by doing the former. Forth follows traditional bottom-up engineering, where you build larger parts out of proven smaller parts. The implicit parameter passing (and multiple return values) eases the factoring of larger functions to an extent that can't easily be duplicated in more popular languages. Also, interactive testing allows you to test even small words without the overhead of writing test procedures, so there's no gap between writing and testing. Writing a few hundred lines of code and then testing often results in long debug sessions, because it is not clear which of the few hundred lines doesn't work.
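The factoring style can be sketched even outside Forth. In this hedged Python illustration (all names invented), each “word” is a one-line function on an explicit stack standing in for Forth's implicit one, so values flow between words without named parameters and each word can be exercised the moment it is written:

```python
# Forth-style factoring sketched in Python: every "word" is a tiny
# function on an explicit stack (in Forth the stack is implicit).
def dup(s):  s.append(s[-1])
def swap(s): s[-1], s[-2] = s[-2], s[-1]
def star(s): b = s.pop(); s.append(s.pop() * b)
def plus(s): b = s.pop(); s.append(s.pop() + b)

# : SQUARED  DUP * ;
def squared(s): dup(s); star(s)

# : SUM-OF-SQUARES  SQUARED SWAP SQUARED + ;
def sum_of_squares(s): squared(s); swap(s); squared(s); plus(s)

# Each word is one line and can be tested on its own, immediately:
s = [3];    squared(s);        print(s)   # [9]
s = [3, 4]; sum_of_squares(s); print(s)   # [25]
```

Because every word consumes and leaves values on the same stack, `SUM-OF-SQUARES` is built purely by naming already-tested smaller words, which is the bottom-up engineering the text describes.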
Documentation in a Forth project is often delayed. When we try something we know we will throw away, we don't write documentation. When the program is done and works (no more redesign necessary), the documentation can be written. But managers drag programmers away to the next task, leaving undocumented code; IMHO undocumented code is often a management problem. In the waterfall flow, documentation (even if slightly inaccurate) is generated up front, so non-existent documentation blocks the code from being written. In the Forth model (or the XP model), documentation is generated as the last step, after fully functional code is reached. Omitting it is bad, but that's only recognized when starting the next project.