
That's a weird take from someone named MLthoughts :p

But you seem to think everyone agrees that dynamic languages are more productive and that using (say) Haskell means trading some productivity for performance. For people used to (good) static type systems — for me that'd be OCaml — this is just not the case. Types do not impede, they help. I guess it might be a question of taste or habit, but don't make it a universal truth and accuse others of being biased when they disagree.




The “types do not impede, they help [clarity of thought / program structure / catching many important classes of bugs]” claim is hogwash.

I say that as someone who spent most of a decade doing Haskell and Scala professionally in large companies that built lots of developer tooling and workflows for them.

The most critical aspect of business software is being able to drop into any particular local section of the code and make significant changes according to shifting business constraints. Anything that enforces a program structure that makes this harder to do, or that requires a sequence of significant refactors around things like type class design, OOP interfaces and so forth, is strictly a loss for the business, not a win, even considering correctness, safety, and developer efficiency as critical success measures.

It’s often much worse than “a loss for the business” too, given that 99% of the time those type class designs, OOP interfaces, or nested inheritance models were premature abstractions. All the extensibility of well-factored SOLID code (or the equivalent ideas in FP) winds up as sheer debt: it fails to be extensible in the ways reality turned out to require, which nobody foresaw.

In a world with excellent foresight and ability to hit pause to refactor architectures, then baking in domain modeling constraints through type system designs would be great. Unfortunately that doesn’t map to the real world at all.


> The “types do not impede, they help [clarity of thought / program structure / catching many important classes of bugs]” claim is hogwash.

You say this, but then never address this claim in the text below. Could you expand on it?

You also do a lot of conflating of typeclasses with OOP concepts like interfaces and inheritance, when those OOP concepts don't have much of anything to do with static typing and exist in object-oriented dynamic languages as well.

The "types prevent you from easily changing things to meet business needs" argument is one I've heard a lot but I'm not familiar with any concrete scenarios where that would be the case. Do you have any examples you can share from your time working with Haskell or Scala?


> “You say this, but then never address this claim in the text below. Could you expand on it?”

I believe I did answer this in my original comment, so I will just refer you back to that.

I disagree that there was any conflating going on. Type system designs enforced with static typing are a hallmark of most of these design patterns around things like type classes, interfaces and inheritance. Of course similar things exist in dynamically typed languages, but they are not the same. For example, “interfaces” in Python are just duck typing conventions (apart from built-in CPython data model protocols). A duck typing interface is not at all similar to an interface as a type system design pattern in a statically typed language. Any similarity is in name only.
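To make the contrast concrete, here's a minimal Scala sketch (the names are mine, purely illustrative): the compiler rejects any call site that can't prove its argument implements the interface, whereas the duck-typed Python analogue just calls the method and hopes it's there.

    // A statically checked interface: every call site must prove,
    // at compile time, that its argument implements Closeable.
    trait Closeable {
      def close(): Unit
    }

    def withCleanup[A](resource: Closeable)(body: => A): A =
      try body finally resource.close()

    // The duck-typed Python analogue is roughly:
    //   def with_cleanup(resource, body):
    //       try: return body()
    //       finally: resource.close()  # works only if resource happens to have .close()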

As for examples, one example that I worked on heavily involved a Scala system for managing DAG dependencies in task execution. The system was set up using phantom typing and a bunch of sealed case classes such that for any logical type of Task that could exist in a DAG, the task had an “Active” and “Passive” variant, where the “Active” variant could only be obtained through a monadic validator processing a “Passive” variant.

The goal was to use the type system itself to encode the concept of “this task has passed through validation and it’s allowed to be processed.”
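For readers who haven't seen the pattern, here's a minimal Scala sketch of that kind of design. The names (CopyTask, ValidationState) are hypothetical stand-ins for the real task types, and Either stands in for whatever monadic validator the system actually used:

    sealed trait ValidationState
    sealed trait Passive extends ValidationState
    sealed trait Active extends ValidationState

    // One case class per logical task type; the phantom parameter S
    // records whether the task has passed validation.
    final case class CopyTask[S <: ValidationState](src: String, dst: String)

    object Validator {
      // The only way to mint the Active variant.
      def validate(t: CopyTask[Passive]): Either[String, CopyTask[Active]] =
        if (t.src.nonEmpty && t.dst.nonEmpty) Right(CopyTask[Active](t.src, t.dst))
        else Left(s"invalid task: $t")
    }

    object Runner {
      // Passing a CopyTask[Passive] here is a compile error.
      def run(task: CopyTask[Active]): Unit =
        println(s"copying ${task.src} -> ${task.dst}")
    }

The appeal is that Runner.run can't be called with an unvalidated task; the cost, as described below, is a case class and a validator specialization for every single task type.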

Because this was designed at the type system level, it created huge problems and never added any real value in the sense of making it “logically impossible” to create invalid Tasks. Number one, it led to huge, painful boilerplate to create the case classes for every type of Task and specialize the type class with a “validator” function. Number two, we eventually realized there were many different aspects to “validation” that did not map well to the concept of “passing through a validator.” For example, some tasks depended on data that didn’t exist in the required location at a certain time, and hence needed retry logic to validate. Some situations involved re-running an already complete task (usually for resource usage observability reasons, or because an external data dependency changed). At any rate, baking validation status into a static type via the phantom type design was nothing but a headache. For all the beautiful code supposedly protecting us from processing invalid jobs, all we got was constant, difficult refactors.

Eventually we abandoned it and just used Luigi instead, and wrote all DAG management code in Python. It was the best decision we made. We lost zero safety and our defect rate did not get worse. Testing caught all the same bugs that compilation would have caught in Scala, and more, with less total code. And because the nature of the tasks in Luigi was just “whatever arbitrary Python you want” it was super easy to write effective validators, accommodate new use cases on the fly, and keep the code clean without dogmatic adherence to a precommitted type system design. Luigi happening to use some lightweight inheritance patterns was forgivable, given the dynamic typing flexibility.


Thanks for the example, always good to hear about some real-world experience with this stuff. I'm curious whether you think the rewrite itself also helped improve the situation, as I've found that oftentimes when I rewrite something, I'm able to use what I learned from the original iteration to design things a bit more effectively. I'm not saying that an initial implementation in Python wouldn't have been better than the Scala one, just curious how much of a difference you think the rewrite made.


Certainly there’s a lot of credit due to “lessons learned” - but the key part is that the main lesson learned was to prioritize spot-change flexibility over a “pluggable” model of extensibility enforced through a rigid type system design. Any smaller-scale tactical improvements in code structure paled in comparison to that core property.



