Our journey in dropping the ORM in Go (alanilling.medium.com)
107 points by henvic 62 days ago | 147 comments



So frustrating. You got like 80% of the way there, and then went "nope, too much work" and diverted to add more complexity.

The answer is to write the SQL yourself, and the scan methods yourself. Code generation is better than ORM, but still a wrapper, still adds complexity, and still brings problems.
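
For scale, one hand-written query-plus-scan method is roughly this (a sketch using database/sql and Postgres-style placeholders; table and column names made up):

    import (
        "context"
        "database/sql"
    )

    type Product struct {
        ID         int64
        Name       string
        PriceCents int64
    }

    // productsByCategory is the kind of hand-written query + scan described above.
    func productsByCategory(ctx context.Context, db *sql.DB, category string) ([]Product, error) {
        rows, err := db.QueryContext(ctx,
            `SELECT id, name, price_cents FROM products WHERE category = $1`, category)
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var out []Product
        for rows.Next() {
            var p Product
            if err := rows.Scan(&p.ID, &p.Name, &p.PriceCents); err != nil {
                return nil, err
            }
            out = append(out, p)
        }
        return out, rows.Err()
    }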

Yes it's a pain in the arse to write all that boilerplate in one go (pun intended). But if you'd started without an ORM you'd have written them one at a time as you needed them and barely noticed it.

Keeping your code aligned with your database schema is very little effort - database schema changes are usually rare and significant.

I write a view for each access method (so I can change the schema without worrying about changing every access method), and a function for each update/insert/delete (for the same reasons). It takes maybe 20 mins to write the whole set each time I have to add a feature with new data, which is a rounding error in the time it takes to write all the rest of it.
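
The shape of that, as a minimal sketch (names hypothetical; the DDL lives in a checked-in migration and the application only ever touches the view and the function):

    // One read view + one write function per access path (a sketch, not a real schema).
    const ordersAccessDDL = `
    CREATE VIEW order_summary AS
    SELECT o.id, o.placed_at, c.email, o.total_cents
    FROM   orders o
    JOIN   customers c ON c.id = o.customer_id;

    CREATE FUNCTION create_order(p_customer_id bigint, p_total_cents bigint)
    RETURNS bigint
    LANGUAGE sql
    AS $$
        INSERT INTO orders (customer_id, total_cents, placed_at)
        VALUES (p_customer_id, p_total_cents, now())
        RETURNING id;
    $$;
    `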

The point is that the database schema is optimised for storing data in the best way possible. The middle layer is optimised for processing data in the best way possible, and the front end is optimised for displaying data in the best way possible. None of these three things are equal. Use an interface between each of them. The interface is important and needs to be carefully considered.


  > I write a view for each access method (so I can change the schema without worrying about changing every access method), and a function for each update/insert/delete (for the same reasons).
God how I hate supporting such an architecture. It loses edit history unless you're very careful (which many developers are not) and do absolutely every piece of DDL in a versioned migration, and it makes DDL extremely painful because you frequently have to drop two dozen functions and views (which depend on each other recursively) and then recreate them.


We’ve started using views and friends a lot more recently (now I’ve recovered from my PTSD from a lifetime ago when I worked in a poorly managed sql server shop). As you point out, they’re often not first class citizens.

On the flip side, we’ve just moved ours to files on disk (one per object) and hacked in some tooling to autogenerate the migrations from them (we use alembic / sqlalchemy which already does a good job here).

It’s kinda the promised land. Edit code on disk, commit and review as usual.


This. Don't write your SQL into the database. Write views and functions as scripts (starting with "drop XYZ", then "create XYZ"), check them into git, and treat them as code.

Migrations are for schema changes. Views and functions are not schema.


Wait. People don't do this?


In the dark times even SCM was rare. We were ignorant cowboys straight out of school. We learned things the hard way. Then found out most such problems were solved decades before.


No, most don't


I understand why people might hate the idea of using raw SQL when I suggest it then! I always assumed it’d be version controlled and 100% replicable (minus data) from sql scripts


> I always assumed it’d be version controlled and 100% replicable (minus data) from sql scripts

It probably goes a bit deeper than that:

Some have version controlled migrations, in both directions (both adding changes and reverting them) and have logic that allows getting to the current state from an empty DB/schema, even generating some test data (seeding) along the way if needed, e.g. in a dev/test environment.

Others are probably halfway there, utilizing whichever of those practices make sense, though sometimes using all of them is impossible for either social reasons or historical ones (e.g. the first migrations weren't properly versioned, so without a full data/schema import or a baseline version, it's impossible to run all of the migrations sequentially and start with an empty DB/schema).

Others don't bother with any of that and handle everything in an ad hoc manner, which may work for them, but personally I'd strongly advise against that, because in cases like that touching the DB becomes a lot more risky, unless you do a backup/export before any of your own changes.
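
In Go land, the usual shape for that first group (assuming golang-migrate here; file and path names hypothetical) is a directory of numbered up/down SQL files plus a few lines like:

    import (
        "github.com/golang-migrate/migrate/v4"
        _ "github.com/golang-migrate/migrate/v4/database/postgres" // database driver
        _ "github.com/golang-migrate/migrate/v4/source/file"       // reads ./migrations/*.up.sql and *.down.sql
    )

    func migrateUp(databaseURL string) error {
        // e.g. migrations/0001_init.up.sql + 0001_init.down.sql, 0002_..., and so on
        m, err := migrate.New("file://migrations", databaseURL)
        if err != nil {
            return err
        }
        if err := m.Up(); err != nil && err != migrate.ErrNoChange {
            return err
        }
        return nil
    }

Seed data for dev/test can then just be another migration that only those environments apply.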


I’ve worked in all those places (currently in the fully versioned model).

We occasionally do a full reset using an export of the current schema as a new base and then start versioning again from there. So there’s not really a good technical reason to stop people from getting a working runnable migration system going - other than social, as you say.


> So there’s not really a good technical reason...

One that I can think of is the application depending on some set of data being present in the system, which was added ages ago and that no one has any idea about across the hundreds of tables.

Furthermore, you probably can't just do a full export/import of the initial state, because everyone having to download about 2 GB of data, a large portion of which is auditing-related information, probably also isn't all that good.

Well, that's more of a technical aspect that's caused by social factors, to be honest, but probably is a blocker nonetheless. Especially if you can't convince anyone to spend a week exploring the schema and setting up this baseline migration & data, when things currently "work well enough" and when the company doesn't give you a data storage solution to keep it in (e.g. NextCloud), doesn't let you use Git LFS or something like that, anyways...


The data in your system is massively more important than the code. Not spending a week exploring and completely understanding your schema is going to hurt you in the long term.


I spent years writing ad-hoc SQL, and had no occasion to learn about or use ORMs, but I did dabble a little in T-SQL and PL/SQL, and they just seemed wrong to me. Not integrated in a clean manner, not derived from a single vision, and so on. I guess what people call an "impedance mismatch". There is excessive overhead just from switching from SQL to the imperative language. I've gotten four orders of magnitude speedup by reformulating a T-SQL script as a single SQL query.

But recently I learned about Power Query (the "M" language standing for Mashup) and I immediately wanted so badly for SQL to acquire all the good aspects of it. Not as a separate module, but perhaps as a successor language.

In particular, the way it allows functions (and higher-order functions) to be defined is wonderful.

Many people don't like SQL, but until now I'd never had that feeling of "here is what it's missing".

I qualify my comments above by mentioning I have no experience with OLTP or in general SQL that needs to be performant on the millisecond to second timescale.

Instead, I've always been working with queries on the basis that they need to run in 5 minutes, or ten, or fifteen, or an hour, as opposed to a day, or a week, or a month.


I'm shocked more people aren't talking about SQLBoiler in threads like these. It solves this exact problem set. You write the SQL schema and it generates all the scan and helper functions for you. We've had a great experience with it at work after running into similar woes as OP with ORMs.

https://github.com/volatiletech/sqlboiler
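
For anyone who hasn't tried it, usage ends up looking roughly like this (a sketch against a hypothetical users table; the models package is what sqlboiler generates from your schema):

    import (
        "context"
        "database/sql"

        "github.com/volatiletech/sqlboiler/v4/boil"
        "github.com/volatiletech/sqlboiler/v4/queries/qm"

        "example.com/app/models" // generated package; import path hypothetical
    )

    func deactivateStaleUsers(ctx context.Context, db *sql.DB) error {
        // SELECT is generated from the schema; the WHERE clause is still plain SQL.
        users, err := models.Users(qm.Where("last_seen < now() - interval '90 days'")).All(ctx, db)
        if err != nil {
            return err
        }
        for _, u := range users {
            u.Active = false // assumes a non-null boolean "active" column
            if _, err := u.Update(ctx, db, boil.Infer()); err != nil {
                return err
            }
        }
        return nil
    }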


What are your thoughts on what the article author has to say about SQLBoiler?


For larger and longer running projects, I prefer sqlc. There is zero magic involved, but it still generates the boilerplate for you. If for some reason you decide you don't like it, you can just pretend it never existed.
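
Roughly: you write the query itself with a name annotation and sqlc generates the typed Go for it. A sketch (package, table, and column names hypothetical):

    import (
        "context"
        "database/sql"
        "fmt"

        "example.com/app/db" // the sqlc-generated package (path hypothetical)
    )

    // queries.sql, the input to sqlc:
    //
    //   -- name: GetAuthor :one
    //   SELECT id, name, bio FROM authors WHERE id = $1;
    //
    // sqlc turns that into a GetAuthor method with typed arguments and result.

    func printAuthor(ctx context.Context, conn *sql.DB) error {
        q := db.New(conn)
        author, err := q.GetAuthor(ctx, 42)
        if err != nil {
            return err
        }
        fmt.Println(author.Name, author.Bio)
        return nil
    }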


> I write a view for each access method (so I can change the schema without worrying about changing every access method), and a function for each update/insert/delete (for the same reasons).

Database view and stored procedures?


yeah. Postgres has functions, SQL Server has sprocs.


Putting all SQL into the database gets you two huge advantages: First, all relations and dependencies are visible for DDL changes, and what the code uses is easy to see without looking through the code. Second, a test database can use (near) live data with the new DDL and new code, making testing much better especially for non-development staff who can spot issues based on the data or UX.


> I write a view for each access method (so I can change the schema without worrying about changing every access method), and a function for each update/insert/delete (for the same reasons)

I’d appreciate it if you could expand on what this means exactly.


Not OP, but sounds like they're using views for reads and functions (stored procs) for writes.

Idea is to insulate code from schema changes as much as possible. If the underlying schema changes, it's possible to sync the views/functions to those changes so the code itself isn't impacted, as it's just talking to the views and functions. Kind of an exposed interface to the DB.

It's not perfect, and some schema changes will invariably require code changes, but should be minimized.

There are pros and cons to this approach and I personally don't favor it, although I am aware of some shops that made it an absolute requirement--especially pre-ORM.


What is the idea behind using views? I have never used them


Not OP but two main things:

1. Access control - you can create new SQL accounts and only give them access to select from specified views

2. Encapsulation and implementation hiding - the underlying table structure can change as long as the views exposed don't change.

At least that's all I know. I'm sure people will add or correct when they read this.


These are correct - some others are:

- ability to do transformations uniformly e.g. if you want to represent a value in a certain way uniformly instead of having to do that in multiple places

- ability to get generated values (like 'now() - create_ts' to get the age of the record) directly from the database instead of computing it repetitively

- move the join mechanisms on the database side to do those efficiently instead of the ORM having to do it (sometimes suboptimally); also helps with joins that are repetitive

- ability to apply certain "default" filters e.g. if you are always fetching with predicate 'active = true', then you can include that in the view
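
A tiny sketch combining those last two (view and column names hypothetical):

    // Lives in a migration; application code then just selects from active_users.
    const activeUsersViewDDL = `
    CREATE VIEW active_users AS
    SELECT id,
           email,
           now() - create_ts AS age   -- computed in the database, not in every caller
    FROM   users
    WHERE  active = true;             -- the "default" filter is baked into the view
    `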


I used views several months ago when I implemented a tracking database for AWS SSM parameters: I wanted to retain a history of all changes to the parameters over time, and the schema was complex enough that queries for simple things like “what’s the current value?” weren’t simple at all.

So, I wrapped common queries in views.

(None of this should be taken as advice for how to approach such a problem.)


What if there are no layers and you can’t think outside hierarchy?

There’s no reason for anyone else to take your view as sacrosanct since many many many apps and services are working just fine without ORMs.

You might accomplish something interesting today if you set aside protecting unimportant ephemera you talked yourself into believing.


As a junior dev I had no idea what an ORM was.

As a mid-level dev I discovered them and wanted to use them everywhere.

As a senior dev I've gone back to manually writing queries.

Such is life.


And when you eventually ascend to true mastery, you'll be able to enter a zen-like trance, in which you'll be able to tell when to use one or not, or indeed whether to use both (e.g. ORM for simple CRUD stuff, hand-written queries for everything else).


This is the way.

ORMs can be a huge time saver and simplify your code base for the day-to-day CRUD; nobody should be writing that anymore IMO.


The problem is that most apps start out as simple CRUD apps. Also, each dependency is a liability. People really underestimate how pulling in dependencies can lower your software quality. Most people have security bugs in mind, but there are also performance issues, memory leaks, logging issues, needless abstractions and incompatibilities.


It's not a problem that they start as CRUD apps. That's exactly the place where ORMs or query generators should be used: CRUD code is always the same and as such should be 100% automated.

So then, what happens when something other than CRUD comes up? You use the ORM, a query generator, or completely custom SQL, depending on which is most appropriate to the situation.

Performance doesn't need to be an issue if the query is extremely rare; if it's core to your app, by all means, write completely optimized SQL. But if it's an extremely rare query or only shows up for your admin page? ORM code should be fine.

That's the point of the parent comment: When you get good enough you learn where to use each.


In my case I always want to do something that is NOT CRUD, but guides on the Internet always want to teach me how to do CRUD apps.


As a senior dev, I use an ORM unless I have a reason not to, and I'd know why or not fairly early.

Why do I use an ORM? Because they're way more maintainable for a wider range of devs, with fewer accidental footguns, and the good ones let me do direct queries too, so there's little downside.

So I default to using an ORM like SQLAlchemy, because I can hand it off to a junior dev afterwards, and they can easily keep it going without needing to write good SQL directly.

The other advantage is that I can abstract over multiple databases easily. So I can have an sqlite db for quick local testing, and a postgres DB for more thorough testing and another for production.


If you can’t trust your junior devs to write SQL nor have sufficient peer review (like pull requests on feature branches) to assist with their training then you should be using stored procedures and remove that responsibility from your developers entirely. Either that or sending your juniors on training exercises to level up their SQL experience.

And you don’t need ORMs to abstract away the RDBMS engine from your main codebase. Nearly every programming language these days (even statically compiled ones which support almost nothing in the way of dynamic code, like Go Lang) will allow you to do this.

In almost all instances where people think they need ORMs, what they really need is better software development practices in the wider sense, and ORMs were just an excuse to hide those larger problems.


Your first statement applies to anything and everything though. If I make any decision that my junior devs aren't comfortable with, I should either do the work for them or train them up. But that's more time and/or effort, when the easier solution for both of us gets us there faster.

What programming languages abstract away as many details of the RDBMS as something like sqlalchemy? Sqlite and postgres do enough things just slightly differently that I'd have to take slightly different code paths to support both. At the very least they'd both require different drivers.

In almost all instances where someone suggests that what is really needed is better software development practices, they're going to be outperformed by better tooling and people using the easier route for the most people.


> Your first statement applies to anything and everything though. If I make any decision that my junior devs aren't comfortable with, I should either do the work for them or train them up.

You train them up. You don’t do the work for them nor do you intentionally cripple your application to avoid your responsibility of training them up.

A healthy organisation will have feature branches, pull requests with peer reviews. You’ll have unit tests, and CI/CD to enforce those tests. You’ll have code coverage analysis too. This is all pretty standard stuff these days so there’s no excuse in wussing out from supporting your junior engineers.

> What programming languages abstract away all the as many details of the RDBMS as something like sqlalchemy? Sqlite and postgres do enough things just slightly differently that I'd have to take slightly different code paths to support both. At the very least they'd both require different drivers.

You don’t initiate the drivers inside your SQL queries. In fact Perl has had tools to abstract that away for 2 decades! PHP, Java and Go abstract away the driver initiation too. In fact, of the dozen or so different backend languages I’ve used over the years, I can’t think of any where you couldn’t do that. Moreover, it is the correct way to architect one’s code. This is basic stuff, web 101.

Yeah some SQL syntax might differ from one RDBMS to another. If that’s the case then your dev environment should mirror the production environment (frankly that should be the case anyway otherwise you’d end up back in the “it works on my machine” arguments of the last decade; back before we learned how to do this shit correctly).

> In almost all instances where someone suggests that what is really needed is better software development practices, they're going to be outperformed by better tooling and people using the easier route for the most people.

But you’re doing the exact opposite of that by not using git feature branches, CI/CD, abstracting away your DB connection (which leads me to wonder if you’re even using connection pooling), unit tests, consistent development environments and all the other advancements of the last 20 years.

This isn’t a hypothetical argument either. I’ve been in the industry for decades and have worked on workflows like you’ve described as well as ones like I have. When I’ve worked in shops like you describe, the quality of the output was embarrassing. I’ve also managed places where I’ve then introduced the aforementioned, and the number of bugs in production plummeted, the amount of QA time needed dropped, the number of times we’ve had to roll back a release dropped to zero, the junior developers are now able to work across more projects (giving more room for more projects to happen) and the CEO’s impression of the team has gone up significantly.


You say it's not a hypothetical argument, yet the majority of your comment is based on a projected hypothetical.

In fact the majority of your argument isn't even focused on the discussion of an ORM but on a situation you made up.

----

That said, let's address your frankly odd rebuttal...

Training takes time, and in a lot of cases is going to have a negligible gain over just using the ORM to begin with.

Regarding everything else like CI, branches etc... (you're now making a strawman but let's address it anyway) Of course those are important, and they have a way smaller cost per dev than learning a new language, and making sure every PR is valid against the current schemas of the DB, without introducing accidental performance footguns.

In the end it's all about all the forms of cost, which is exactly what my initial post said and you seem to have glossed over. Using an ORM isn't free for every use case, but in many use cases it's cheaper than training up people, especially when this isn't a regular part of their job. Perhaps you're assuming a job where everyone is constantly working on DB interactions but a lot of gigs have stuff like this as a small part of the job.

If they were only doing db interactions, then obviously the cost calculation changes.

I'm also not talking about cost in monetary terms. I'm talking about cost in terms of time, effort, financials, planning etc...

I'm not sure why you suddenly suggest I wouldn't be using git branches or CI to try and make your point? Again, those have a better cost to benefit ratio. Every single decision comes down to that when picking a project.


> In fact the majority of your argument isn't even focused on the discussion of an ORM but on a situation you made up.

No, the vast majority was explaining how development best practices resolve the common use case that is being presented here for why ORMs are a good thing (namely “my junior devs are somehow too stupid to learn SQL”).

> That said, let's address your frankly odd rebuttal...

What’s odd is that you think setting up guardrails to prevent human error and then empowering your developers to write good code is itself odd.

> In the end it's all about all the forms of cost, which is exactly what my initial post said and you seem to have glossed over.

I covered that point with my anecdote, where I talked about the improvements that best practices bring.

> Using an ORM isn't free for every use case, but in many use cases it's cheaper than training up people, especially when this isn't a regular part of their job. Perhaps you're assuming a job where everyone is constantly working on DB interactions but a lot of gigs have stuff like this as a small part of the job.

It doesn’t cost any extra to train people up using the development practices I’ve outlined. Training just becomes a side effect of day to day reviews. However ORMs have a very real performance cost that will have an ongoing bottom line in your infrastructure performance and hosting costs.

One is a free win/win scenario and the other is a lazy hack that adds costs.

> If they were only doing db interactions, then obviously the cost calculation changes.

You’re underestimating your juniors’ ability to learn. Or you happen to work in some really toxic environments (which might explain your other viewpoints). But you don’t need to clock up 40 hours a week on SQL to become competent enough to write most queries.

Remember we are not talking about making them DBAs, just them being competent enough to be at least as good as your average ORM (which, frankly, isn’t hard).

> I'm also not talking about cost in monetary terms. I'm talking about cost in terms of time, effort, financials, planning etc...

I’ve been factoring all of them in as well. I’m a cost centre owner and people manager and so I’m well versed in how to equate costs correctly. I’ve had to do exactly these kinds of studies professionally.

> I'm not sure why you suddenly suggest I wouldn't be using git branches or CI to try and make your point?

Because you make the argument that getting juniors to write SQL will lead to footguns and I’m explaining how you manage that correctly.

> Again, those have a better cost to benefit ratio. Every single decision is down to that.when picking a project.

No, those practices should be the default for any and all commercial projects. If you’re knocking up a POC or building something for a hobby then feel free to do it however you want (within reason). But anything built as part of an internal system or external product, and especially if it needs to be maintained for any duration of time, they should all be developed using the principles I’ve described.


Nowhere in any of my posts did I say juniors are too stupid to learn. Much like your previous comment, you are filling in your own blanks.

Everything has an associated cost. An ORM in a familiar language, like sqlalchemy in Python, is easier to learn than learning SQL from scratch, and it provides guardrails like schema enforcement that SQL does not. That means anyone, not just a junior, can drop into a codebase using it and be able to use it quickly.

You seem to be under the impression that an ORM has to have a significant performance penalty. While there's of course going to be some, for the vast majority of tasks that I use it for, it is negligible versus the rest of the IO happening. If there is a significant penalty in some edge cases, I can still use direct SQL queries. Again, like my original post, I know when and where to use an ORM and as I've tried to explain to you multiple times, I know it's not a silver bullet solution.

In your previous comment you simply went off the rails talking about branching and CI without tying it into your argument. If your argument is that CI will catch the issues, CI can only catch some issues. It too isn't a silver bullet, and relies on code review to catch some other issues. Why not just use an ORM to add that security up front? It doesn't replace CI or code review, but it greatly reduces the risk of each stage failing.

> they should all be developed using the principles I’ve described

Okay, you do you. You've disallowed any reply of situational pragmatism in favour of your own fastidious absolutism.


> Nowhere in any of my posts did I say juniors are too stupid to learn. Much like your previous comment, you are filling in your own blanks.

Easy conclusion to make when every post is full of reasons why juniors can’t learn SQL ;)

> is easier to learn than learning SQL from scratch,

I very much doubt many juniors are learning SQL from scratch. They’d have learned some already at college. And if they genuinely don’t have any experience with databases whatsoever then you bloody well need to skill them up irrespective of whether they use an ORM. So again, your argument here has no teeth.

> In your previous comment you simply went off the rails talking about branching and CI without tying it into your argument.

I’ve explained how that ties in several times now. There’s a difference between you not understanding best practices and me not explaining them

> If your argument is that CI will catch the issues, CI can only catch some issues. It too isn't a silver bullet,

I agree

> and relies on code review to catch some other issues.

Not just code review, automated tests, vets and so forth too. But I also agree that code review is a key part.

> Why not just use an ORM to add that security up front?

Because ORMs aren’t a silver bullet either. Except you’re trading in due diligence with lazy abstractions that add runtime overhead.

> It doesn't replace CI or code review, but it greatly reduces the risk of each stage failing.

It doesn’t reduce the risk of it failing. It reduces the ability for code to be vetted. It adds overhead and runtime costs. It adds complexity and the only benefit is you can keep your juniors blissfully in the dark about how your database works. That strikes me as a stupid compromise.

> Okay, you do you. You've dissalowed any reply of situational pragmatism for your own fastidious absolutism.

I’m not being absolute. I’ve said POCs and hobbies can bypass these best practices. And I’m being pragmatic when I say that every organisation I’ve worked at, both as an engineer and as a leader, has benefited from these best practices. Hence why they’re called “best practices” rather than “pointless processes”. Every organisation I’ve worked at where I’ve seen teams bypassing these best practices has been dysfunctional. That might work for you but it results in shit code — which is likely why you’re dependent on ORMs to fix things for you. And I urge you to read more on this topic rather than just taking my word for it. :)


Nowhere have I said juniors can't learn SQL either.

This is such a pointless argument anyway. You aren't bothering to read what I'm writing, you're twisting what I'm saying in what little you do read by filling in your own blanks, and you're insinuating that I won't use other best practices just because I won't agree with your subjective definition of best practices.

I have read many arguments about this, and again, I am experienced enough to know when to use it. The issue here is you have a completely condescending view of anyone who's not sharing your exact view. Your posts repeatedly insinuate that your experience outweighs my own, without actually providing anything other than YOU think ORMs are bad.

Not every college teaches SQL, not every junior comes from the same technical background, and you're assuming a lot more about someone else's workplace than you have any basis to.

You're also kind of a jerk by further insinuating that I put up with shit code. Perhaps I just don't put up with shit personalities who have a chip on their shoulders.


> Nowhere have I said juniors can't learn SQL either

Then why invest so much effort trying to convince me that they shouldn’t learn SQL or that ORMs are better than learning how the underlying database manages their relational data? ;)

If I come across as arrogant it’s because I’ve spent many years managing and fixing a great many failing dev teams, and the comments you’re making resonate heavily with those who are usually against the changes I make. They always come round to agreeing that those changes are for the better after living with them for a few months.

But I guess we’ll never prove that on a message board so I’m happy to agree to disagree for the sake of etiquette.


ORM is that one layer that fails to abstract anything substantial yet whose existence is intrusive enough to annoy people.

The worst example of leaky abstraction.


ORM makes the easy things easier and the hard things harder. No thanks.


So choose an ORM that lets you easily drop down to plain SQL in the rare cases where it's necessary and you get the best of both worlds.


100%. You don’t have to trash your 5 passenger car because you occasionally need to transport 6 people.


You also don’t buy a 5 seater car to begin with if you already know you need 6 seats.


Or just do away with your ORM entirely and train your staff how to write SQL so they can be effective regardless of the complexity of the queries.


Can you provide some examples? I use sqlalchemy, and the amount of work that it does for me when it comes to handling mutating data is pretty remarkable.


No real specifics, but I usually run into ORMs in the context where I'm the person on the team with experience running MySQL and someone has a slow query. When they're hand-built queries, I can usually provide a query or sequence of queries that provides the same data without knocking the server over, and it gets put into production within the same day. When they're ORM, it takes days to find the query, and then more days to bend my query sequence to fit the ORM.

Sometimes you can add indexes to fix things, but sometimes you really just need to do a simple query followed by a large number of single row queries unioned together, but that doesn't seem to be a feature of the ORMs I've been adjacent to.


Yeah, I remember now the pain of using bad orms. That feeling that you know what you want, but it’s going to load the data via configuration that you can’t bend into shape.

Sqlalchemy is not that kind of orm. If anything is running slow in production I can find it from the logs and then track it to the code quickly - then you can do any sort of query you want in its place. For me it’s never the orm loading stuff that’s slow, it’s always the more complicated queries we’ve done for reporting or whatever.


You should be using an APM like NewRelic to find those slow queries. If that's not an option you should have a profiler that tracks how long each query took and have it print out the query to a log. Turn it on for a couple hours during peak time and analyze it for slow queries.

Also use static table design when applicable and avoid running any query that locks the table up. If the query takes any longer than 500 ms you should be running it as a background job instead.


Finding the bad queries is easy --- mysql has a slow query log since forever. The problem is finding where the query was generated. And convincing the people who wrote the bad query to use a better one.

Even without that, show process_list or whatever it is tells you how long the query has been running, with mytop you can just look for anything that's been running for more than a couple screen refreshes and start yelling from there.

Usually, when I'm brought in, somebody else designed the tables, poorly, too. There's usually not a whole lot I can do about that, because I don't usually have enough authority to demand a data migration.


That's very true and NewRelic has a built in trace route as well as a way to sort and view the slow queries. Even breaks down and does an analysis on the query. Coupled with logging down to the individual thread. It's very powerful.


I lean nowadays towards:

For ~90% of your database use, ORMs do basically the same queries and get rid of a massive amount of boilerplate and potential errors. Use 'em so you don't spend all day manually mapping object fields to the right slots in SQL statements.

For the other ~10% of database use, ORMs make it a massive pain to write a query that does what you actually want, or they do something boneheadedly inefficient. So use an ORM that makes it easy to drop to pure SQL when you need to, and don't hesitate to do it if the situation calls for it.


As with everything, it really depends on what you're doing. Do I want to implement an entire CMS and core business backend directly in SQL? No, of course I don't. I want to use Django's ORM because then I get all the nice QuerySet API, integrations with forms and filters, the nice Admin UI, etc.

But do I want my end-users to hit my Django ORM APIs 10,000 times per second? Absolutely not. So you write hand-optimized SQL for those and maybe run them in a service a little bit faster than Python/Django/Gunicorn.

It's the same database, it's the same "API". But you get the value of everything that Django's ORM provides and you just cheat a little bit by dropping down to the lower level for things that need it, and not everything needs it.

Don't throw out the baby with the bath water!


I've gone past this and have a very clean and efficient query builder in my ORM. I maintain one for work and another is my open source PHP framework (15 years old now).

Mine supports simple joins and relationships but anything beyond that requires writing custom queries.


How does your query builder work? Is it an API that you use to build queries during runtime or do you generate database access code from queries?


The framework has 3 layers, Query Builder -> ActiveRecord based ORM -> CRUD Controllers

The CRUD controllers are then mapped with a router to different paths.

When you create a CRUD controller you extend the base class, which has everything you need, and then you set a single parameter, className, which sets the object this CRUD controller is for. The base class works out of the box with front ends like agGrid.

The query builder itself is only part of the puzzle.

Every single ORM in existence is converting the object's data from the native types of the language into the types of the database and vice versa. There's almost always some configuration for each object type, in whatever format, from native code to YAML to XML to plain text. Every ORM has a toDatabaseValue() normalization function that also goes in reverse when pulling data from the database. Some ORMs will call this "data mapper" or "mapping" or "normalization".

When this becomes evident, the queries themselves become suuuuper simple, because what you send into your INSERT or UPDATE queries is a complete array of strings with field values already predetermined.

This is necessary because a database engine might have dozens of types like enum, string, text, longtext, smalltext, bigtext, varchar, char, but all of them in your language could be "string". Advanced ORMs will know the database engine you use and even warn you if you try to save a string over 255 chars to a char(255) column, but most don't.

My code is available here. https://github.com/Divergence/framework/blob/6af1b6b0e56b25c...

Where people go wrong is trying to do this field normalization logic inside their query builder. That leads to all sorts of problems. The query builder shouldn't know what is going on with your ORM at all. It's just gluing strings together to form a query.

You can see my dead simple query builder here: https://github.com/Divergence/framework/tree/develop/src/IO/...

I honestly used to not have one as it's so simple but decided it was a good abstraction to assimilate from other frameworks. My query builder does not attempt to sanitize anything. The ActiveRecord class takes care of that through the data mapping conversion functions for all the basic HTTP CRUD functionality.


Interesting. Thanks for sharing!

I wrote something for Java roughly 20 years ago (before ORM was really a thing). Of course I enhanced as needed over the years, but have used it heavily. It supports any DB with a JDBC driver and has easily saved me more time than anything else. There's not even a close second.

It's a pure code generator with a UI on top. You write an SQL select stub that references columns/tables for one or more tables, then provide the class name you want to generate. You then add any number of where clauses (including join conditions and parameterized placeholders). For each where clause, you use checkboxes to indicate what you want generated: select, update, insert, upsert, delete.

When the code is generated, all column-to-object field mappings, including types, are resolved through inspection of metadata (provided by the JDBC driver). Property names for the resulting domain objects are also generated using snake to camel case translations and, of course, methods resulting from parameterized statements include parameters of the correct types.

So, instead of having, say, a User class with a Profile class, which has an Image class, each with its own properties representing the full set of columns for the underlying table, you frequently need a limited subset of each combined into one object. For instance, a ProfileListView might need username, email, first and last from user; profile_pic_id, slug, and short_bio from profile; and thumb_filename from image.

There's more to it, but I took this join approach vs the "one table to one object" model because I found early that you frequently need different views of the data, based on different use cases. So, it's a way to flatten the underlying relational model when needed.

Not perfect, of course, but the best thing going when I built it. And since, I never found an ORM that was worth the switch.


I guess there aren’t many people going to write a long essay on how they use ORMs moderately, knowing the limitations, and it’s perfectly fine. It wouldn’t gather a whole crowd either.

What feels wrong to me in this post is the lack of nuance on how they used their ORM. For instance, the whole part on how the ORM generated queries they wish were done in a different manner: why didn’t they bother writing either lower level code or just raw SQL to get their data as they wished?

It looks to me they could have kept their ORM for the 80% cases it works great, and hand tune the 20% that could be problematic. Just like any other optimization problem.

There would still be the performance part, where some overhead remains, but it’s usually not an issue if the ORM option was on the table in the first place.


> It looks to me they could have kept their ORM for the 80% cases it works great, and hand tune the 20% that could be problematic. Just like any other optimization problem.

It doesn't make for "good" blog posts I suppose. Personally I find these discussions to be tiresome and depressing. Take for example this seven year old post from the creator of Hibernate[1]. It predates a lot of today's "seniors". This field is clearly stuck in a local maximum on this and a few other areas. Will there ever be anything more than this endless trial and error?

[1] https://www.reddit.com/r/programming/comments/2cnw8x/what_or...


The hate should be directed at the SQL builder part of most ORMs, which gets complicated pretty quickly; it's like learning a new language when plain SQL usually works much better.

If you write your SQL, and some tool (like code generation) then maps it to objects, you are golden. That's the part you'd have to write anyway. However, if the code generation goes bust (e.g. gets abandoned, like most hobby mappers), you should make sure you can take the code and continue like you never used the generator in the first place.

It looks like the author's suggestion, sqlc, is pretty close to that. It seems to generate a lot of code though, so it might be an issue if you don't want to maintain that. Test it out here: https://play.sqlc.dev/


I've been a huge fan of Slonik, https://github.com/gajus/slonik, which makes it easy and safe to write direct SQL queries. The author of Slonik had a good article about the downsides of query builders, previously discussed on HN, that I totally agree with: https://gajus.medium.com/stop-using-knex-js-and-earn-30-bf41...


On the contrary, I hate dealing with the various escaping characters for SQL, and to me the SQLAlchemy expression language is a life-saver. It gives me back a textual SQL expression as a string and I can choose exactly how to run it. I do avoid the ORM part as I like to be in total control of the transaction, and I am often in the Python async world, where ORMs don't play nicely.


> I hate dealing with the various escaping characters for SQL

I don't understand that bit, with something like Slonik you don't use escape characters, it does it transparently for you by taking advantage of JavaScript's string template literals.


    // updates the whole table, ignoring conditions
    db.Model(<table>).Where(<conditions>).Updates(<values>)
    // updates the rows matching conditions
    db.Model(<table>).Updates(<values>).Where(<conditions>)
This is a massive footgun in Gorm. Don't use tools that help you make catastrophic errors.


Yeah wow. I don't understand why any ORM would ever let you do that. I don't think I've ever heard of a case where someone actually wanted to update every row in a table from application code.

IMO, any decent ORM should block that entirely and force you to drop to SQL if you actually want to do it, or make you call some obnoxiously named function, like:

db.Model(<table>).YesIReallyDoWantToUpdateEveryRowOnTheTable().Update(<values>)


Having written a non-ORM Go-Postgres tool [1] similar to sqlc, I'm a big fan of this article, especially their acknowledgment that using SQL moves the eng culture towards data-centric engineering. Some thoughts:

- Application code should not rely on database table structure (like the ActiveRecord pattern). Modeling the database as a bunch of queries is a better bet since you can change the underlying table structure but keep the same query semantics to allow for database refactoring.

- Database code and queries should be defined in SQL. I'm not a fan of defining the database in another programming language that generates DDL for you since it's another layer of abstraction that usually leaks heavily.

- One of the main drawbacks of writing queries in plain SQL is that dynamic queries are more difficult to write. I haven't found that to be too much of a problem since you can use multiple queries or push some of the dynamic parts into a SQL predicate. Some things are easier with an ORM, like dynamic ordering or dynamic group-by clauses.
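
Concretely, that predicate trick looks something like this (a sketch; table and column names hypothetical):

    // Optional filters expressed in the SQL itself rather than by string-building:
    // passing NULL for a parameter disables that condition (e.g. args (nil, 42)
    // filter by customer only).
    const listOrders = `
    SELECT id, customer_id, status, placed_at
    FROM   orders
    WHERE  ($1::text   IS NULL OR status      = $1)
      AND  ($2::bigint IS NULL OR customer_id = $2)
    ORDER BY placed_at DESC`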

[1]: https://github.com/jschaf/pggen

Similar comment from a few months ago comparing Go approaches to SQL: https://news.ycombinator.com/item?id=28463938


say your application has 120 tables. Do you write out 120 "INSERT" statements, 120 "UPDATE" statements, 120 "DELETE" statements as raw strings or otherwise every time you need to do any DML ? if not, and instead you have something that dynamically writes these out given some kind of object state to persist, you are using an ORM.

similarly, when you transfer the states of structures and/or objects to those INSERT/UPDATE/DELETE statements, or the rows from the cursor execution of a SELECT statement into object/structural state, that is also using an ORM.

I tend to consider "a query builder that writes SELECT statements" to be the least important thing an "ORM" can do, and it's not even necessary for a library to be defined as an ORM. The "mapping" is mostly to do with automating the movement of data between the objects and the database, that is, the relational tuples mapped to object attributes.


> Do you write out 120 "INSERT" statements, 120 "UPDATE" statements, 120 "DELETE" statements as raw strings

Yes. For example: https://github.com/jschaf/pggen/blob/main/example/erp/order/....

> that is also using an ORM

ORM as a term covers a wide swathe of usage. In the smallest definition, an ORM converts DB tuples to Go structs. In common usage, most folks use ORM to mean a generic query builder plus the type conversion from tuples to structs. For other usages, I prefer the Patterns of Enterprise Application Architecture terms [1] like data-mapper, active record, and table-data gateway.

[1]: https://martinfowler.com/eaaCatalog/



We will never be able to separate this fine argument from the fact that gorm has been terribly immature for quite some time. ORMs take a long time to grow. Look at Arel and ActiveRecord. Sufficiently mature ORMs provide affordances to work around the fundamental N+1 and abstraction issues that always crop up, but they take time, dedication, and a broad community. Go is an interesting case in which there are enough useful applications that don't comprise "web apps" that we might not be at the point of ORM maturity. I've used both gorm and go-pg, and have definitely encountered frustrating limitations coming from both ends of the problem.


In NodeJS Prisma is really good for just building statically typed CRUD-like queries. It gives you result types inferred from the schema so you're less likely to make mistakes.

Would still use real SQL for serious stuff, but in my experience most queries are CRUD anyway so I don't see lightweight ORMs as a problem as long as they're not doing magic caching in the background and lazy fetching. Let me decide how I want to do that.


Using MikroORM in NodeJS and works wonders. Has the "raw" option for queries I want to handle myself, or even the "knex" builder for something in between


Hmm... Looks like an "active record" pattern (like Hibernate). This is exactly what I don't want. Essentially I only use ORMs as query builders. No magic behind the scenes.


Author said it isn't[0] a year ago but not sure. I think it allows a couple of strategies to be used.

[0] https://github.com/mikro-orm/mikro-orm/issues/403


Never understood the hate for ORMs. I need to map from my data storage to my domain model somehow, so why write all that code myself?


I never understood the love for them.

I recently posted this about my experience in migrating to PostgreSQL without anything like an ORM... https://henvic.dev/posts/go-postgres/

We achieved a really great result doing so. In comparison, at my previous workplace, I was forced to use an ORM, and everything was slower, awful, and hard to maintain for no benefit whatsoever. I'm not saying that it's 100% the fault of ORMs, but it was at least 60%, for sure.


The love comes when you're a single maintainer at a standard line-of-business app that manages a few hundred tables.

Most apps don't need to be performant. Most business logic is object-centric and relation-centric. Leveraging the best features of highly-integrated frameworks like Django and Rails means that the framework has to understand and talk to objects, not database rows.

I have maintained several Django projects with a couple hundred tables by myself or another team member. I cannot imagine maintaining such a gigantic heap of equivalent SQL for trivial business logic.


Nobody "loves" them. It's one of many tools just like generic, OOP, functional, imperative, "nullsafe" features.

Long list of tools with usefulness.

I find people who tend toward the extreme on either end aren't that pragmatic.

I read your blog and I don't see any problems caused by the ORM. Your product data model should fit in an ORM (I get it, you're just using it as a simple example). The blog only mentioned "I used ORM in the past and had a bad experience" without explaining the problems.

> In comparison, at my previous workplace, I was forced to use ORM, and everything was slower, awful, and hard to maintain for no benefit whatsoever

That seems to be the problem: your previous workplace, not the ORM itself.


The hate isn't in the mapping. It's perfectly reasonable to abstract away mapping of relational data to objects.

It's the translation of objects to SQL queries that's the problem. ORMs typically output inefficient and mostly obfuscated SQL, when a human can do a better job of hand-writing the SQL, knowing the data usage patterns needed.


It's also the problem of leaky abstractions. Now I need to know a lot about SQL, as well as the ORM, and in particular how the ORM maps to my conceptual model of the underlying SQL.

The cognitive load of this (with some more than others) can be extreme. I'm thinking of Hibernate in particular.

I think there's a good underlying drive for this: type safe, composable, declarative language-centric queries. Especially helpful with bulk data management. I think if SQL was a more natural mapping to this model, it wouldn't be so bad, but SQL was not created for programmatic interaction (it was created to be hand written), and can be challenging to create abstractions around.

These days I'll lean on simple ORM-light tools, but quickly prefer an escape hatch when things get even a little complex. Mapping the output is something that I'm happy to delegate to a framework though.

Some tools can even provide compile time type checking of your queries, I think that's getting somewhere more interesting.


Speaking for Typescript and Mikro-ORM. I use the ORM extensively for regular operations and queries, but usually for searches with more complex queries, I do it by hand (get PKs), and then load the entities by PK (I could skip that but it's to let the ORM handle the mapping).

ORMs generally have a "raw" escape route where you can just do it yourself? Or maybe it's just the one I use..


It's not just yours, it's common to most (virtually all?) ORMs, which makes these ideological arguments somewhat moot.

Just use an ORM for mapping and for simple queries and hand crank the complex queries.


ORMs could (and most do) provide some escape hatch, where you can write the query yourself and reuse the hydration layer, or reuse the query generator and customize the hydrator, or a combination. Or you can just bail out completely for the few performance critical queries.

Honestly, ORMs are just an abstraction. They come at a cost and they’re not a silver bullet, just like most abstractions. I believe the hate for ORMs in many cases is due to a lack of understanding/wrong expectations.


Because relational databases are very good at (surprise!) relations. So in SQL you can query data from several joined tables, query only certain columns, aggregate columns etc. etc. And all of these things are made needlessly complicated by using ORM. Even with the best ORM, you still have to learn an arcane new syntax for stuff you could write in 30 seconds when using SQL...


Right, but for most of that stuff I can just write a one-off query, munge it into the ORM's objects and pass that off to my view for rendering. But probably 80% of my queries are "get this object by ID" or "get these objects and some related ones by a simple condition" or maybe a slightly complex condition. In these cases an ORM works wonders and has almost zero downsides


ORMs have an escape hatch so you can write SQL for the complex queries (and hopefully not open yourself up to SQL injection in the process) and still benefit from the ORM for the simple to moderate cases.

> So in SQL you can query data from several joined tables

Can do that in your ORM too.

> query only certain columns

So far no issue with ORM


> Never understood the hate for ORMs. I need to map from my data storage to my domain model somehow, why write all that code myself

This is addressed in the article the title of this one refers to, "The Vietnam of Computer Science" [1]. I really recommend reading it in order to understand why the "obvious" solution is not always the best fit, but the very short summary is: due to the Object-Relational Impedance Mismatch [2], using an ORM can give you headaches and unwanted surprises.

[1] http://blogs.tedneward.com/post/the-vietnam-of-computer-scie...

[2] https://en.wikipedia.org/wiki/Object%E2%80%93relational_impe...


I went through most of the article, and it seemed to boil down to this kind of reflection:

> But as time progresses, it’s only natural that a well-trained object-oriented developer will seek to leverage inheritance in the object system, and seek ways to do the same in the relational model.

Basically, things are not perfect, and people try to blindly apply layers upon layers, thus getting deeper and deeper into a quagmire.

But the proposed solution of applying no automated layer doesn’t compute. If he can restrain himself from using any ORM at all, why can’t he restrain himself from adding inheritance in ORM models or other “let’s OOP this to death” blind approaches?

This feels like the “I sometimes get drunk so I’ll build a whole support structure to stop me from drinking”, and at no point does someone step in to say “just be moderate and it will be fine”.


Maybe the original author never read Chomsky.

https://chomsky.info/198210__/


Thanks for posting it. I knew Chomsky's position (that the US involvement in Vietnam was actually a war against South Vietnam, with North Vietnam as the excuse) but had never taken the time to actually read this interview in detail.

Yes, this makes the whole ORM analogy fall apart, because it's based on a misconception. But I think the analogy still works when using the popular understanding of that war as the comparison.


It's no wonder this article was written in 2006. One of the problems describes DBAs! Has anyone heard of a DBA in the last decade?

Most of these other problems are just... solved? Any decent language will let you override the meaning of equality between two objects, if you don't use a ton of inheritance the inheritance problem won't bite you, good libraries let you describe the schema once and either generate SQL or generate classes, etc.


Two years ago I worked at a company with a DBA team. In my current job, there are no database experts and people don't understand things like indexes.

In my experience, most developers don't understand databases (of any kind).

So I'd say none of the problems from that article are really solved.


ORMs are different from SQL generators; a typesafe SQL generator, for example, can be a great thing. The problem is mapping SQL results (which are basically rows of data) to objects. Objects have identity, but an SQL row can be anything, even if most of the time it's a full row from a table with a primary key.

As an example, if you do a query for object o but with partial fields, and later you query for the same object but with full fields, are they the same object? SQL doesn't care, but in application code you need to ensure they are the "same", because objects will flow in between functions and if you e.g. do o.setAge(30), you want both of the live objects to reflect that (for the same transaction).

It's a hard problem with lots of compromises or rules, and I think most people are better off just with a query api.
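
To make the identity point concrete in Go terms (a sketch; error handling elided): two plain scans of the same row give you two unrelated values, and nothing keeps them in sync.

    import (
        "context"
        "database/sql"
    )

    type User struct {
        ID   int64
        Name string
        Age  int
    }

    func identityMismatch(ctx context.Context, db *sql.DB) {
        var summary, detail User // two Go values for the same database row
        _ = db.QueryRowContext(ctx, `SELECT id, name FROM users WHERE id = $1`, 1).
            Scan(&summary.ID, &summary.Name)
        _ = db.QueryRowContext(ctx, `SELECT id, name, age FROM users WHERE id = $1`, 1).
            Scan(&detail.ID, &detail.Name, &detail.Age)

        detail.Age = 30 // summary never sees this; an ORM with an identity map
                        // would have handed back the same object both times
    }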


Is writing SQL query templates too much?

The only valid advantage of an ORM is to prevent SQL injection, which can be solved with prepared statements.


Okay so I write a prepared statement, send it to my database with some parameters, and get back some rows. Now what? How do I turn those rows into structs or objects or whatever that I can pass around for business logic? Maybe I write a function that takes a row and maps it to an object, I call that for every row I get back, whatever. Oh but now I'm joining a belongs-to relationship and I want that joined row represented as an object, too, so I have to write a function to convert that row and oh what if the fields have conflicting names okay I can always prefix them with the name of the table and wait sometimes I'm not querying with this related object so I have to check if the related id is included in the row and... on and on and on for every possible way I could make a different query.

Great, now I've basically reinvented an ORM but it's worse because I have to manually maintain all this mapper code and it only ever accounts for the cases I've considered. If I make it fully generic, then I've really just reinvented an ORM.


There are two things that get bundled together as ORMs: mapping utilities and query generation. You can have one without the other - Dapper (.NET) and JDBI (Java) are examples of mapping code without generation.

The fact that you see mapping libraries without the query generation shows what's fundamentally broken about query generation. Namely, that mapping is a function of a result set. Going from a result set to an object is generally fairly straightforward. Going from an object structure to a result set (with the query merely being the declarative representation of that result set) is problematic.

The hatred comes from the query generation and is, I think, sensible. The time lost attempting to trick an ORM into generating performant SQL (on top of the time taken to set it up and learn its query language on top of SQL) is a poor trade. Frankly, if it came down to it, I think I could write the mapping code manually and come out ahead over dealing with the headaches of query generation.


My compiler cannot type check SQL code, so I need to run SQL to check its correctness. It's the same problem as with JavaScript, Python, Perl, shell, etc.

For JavaScript, the problem is solved (partially) with TypeScript.


I feel like the biggest problem with an ORM is how people try and use them. It’s like an all or nothing approach. You hit a wall where using an ORM for that sucks and so now using an ORM for everything sucks.

I never would have thought of using an ORM in golang, it doesn’t seem like that kind of language. Even an ORM in python is debatable.


I use them for the really simple stuff, where I basically know what the generated query will be. What I don't understand, though: what is wrong with writing a stored procedure or SQL function and invoking it as a method from your OO layer? That gets to 98% of my usage and it's fine.
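
In Go, for example, calling a Postgres set-returning function is just another query (the function and column names here are hypothetical):

    func activeUsers(db *sql.DB, minAge int) (*sql.Rows, error) {
        // get_active_users is defined once in the schema; the app just calls it.
        return db.Query("SELECT id, email FROM get_active_users($1)", minAge)
    }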


Some small points:

> The ORM was a natural culprit: it’s easy to see that many ORMs will make use of object introspection to build SQL statements, hydrate results, or both. Gorm’s memory footprint is extreme although sadly not uncommon. Bridge started with Python on the backend and we used Django’s ORM to interact with the database which had similar problems.

I believe these are both active record ORMs, which are not as good for various reasons (including default memory usage) as good data mappers such as SQLAlchemy and Hibernate.

As for where enums etc are "mastered" - this is still a decision to make with or without ORMs.

I think the main tricky bit of ORMs is that objects are trees and relations are graphs; this causes problems. Not so much anything about encapsulation etc.


Am I the only one who doesn't see this as a binary choice? I've written SQL for decades and I still like ORMs for most scenarios because:

1. I always check the SQL being generated

2. I profile the SQL being generated to ensure that the indices are being used optimally

3. I only fetch what I need and try to reduce trips to the database

4. Most of the time, this is good enough and helps reduce boilerplate.

5. Sometimes, after doing the above, I see that the ORM isn't up to the task - then I just drop to SQL and get exactly what I want the way I want it. The ORM doesn't block me from doing so (rough sketch below).
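
For instance, with GORM (the ORM from the article) the escape hatch is something along these lines - struct and table names are made up, and I'm going from memory on the API:

    type UserSummary struct {
        ID         int64
        Email      string
        OrderCount int64
    }

    var summaries []UserSummary
    err := db.Raw(`
        SELECT u.id, u.email, count(o.id) AS order_count
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id, u.email`).Scan(&summaries).Error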


Heh.

Recently I had to stand up a quick elixir project and decided to write all queries by hand and not use Ecto.

It was extremely enjoyable writing every query from the start. Just thinking carefully about what columns I needed, and crafting the best joins and where clauses made efficiency baked in from the beginning.

If one is not careful and just lazy with an ORM, you get back all columns all the time - and this is the biggest sin of an ORM. Not to mention sub-optimal where clauses that kill performance.

And the next positive side effect was the amazing speed of the database interactions in the app.

It’s not something I’d do all the time since sometimes you do need the discipline of Ecto, but I say we should write more sql by hand instead of less.


If you’re not careful and just lazy with your hand-crafted queries, you'll miss updating old queries when you alter your schema. You won't convert your types the same way in each place you query the data, or you'll forget to check your data's consistency where it falls through the cracks between your SQL rules and the application's requirements.

The “not careful and just be lazy” way won’t get you in a good place whatever approach you take.


The PG driver through the Ecto Repo nicely handles and converts all the database types into Elixir types. Integers go to Ints and JSONB goes into Maps as one would expect.


Ecto is pretty damn nice (also technically it's not an ORM - and that's not a trivial distinction). Yeah, it adds a little bit of latency, but that will usually be overshadowed by the DB transactions... Am I mistaken about this?

Are you not afraid that your queries will have SQL injections? Or handle unicode poorly, binary blobs, etc?

> you get back all columns all the time

I think this is why you can map multiple schemas to the same table in Ecto


Ummm we use the PG driver through the Ecto Repo which handles SQL injections.


The problem is that writing SQL by hand tends to lead to more errors. Experienced devs might be able to minimize those errors (and potential vulnerabilities, remember all those injection bugs?), but especially for junior devs I would recommend using an ORM to avoid shooting themselves in the foot. The question is Performance <=> Security, which is most likely going to be security for most people on the data/application layer.


For some data e.g. users I use two Ecto schemas for the same table, one with the full data and one with a small subset (id, username, full name). This makes it really easy to load just the data that's truly needed in associations etc.
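
(For the Go context of the article, the rough analogue is just two structs over the same table, each with its own queries - names assumed:)

    // Full row, loaded when you actually need everything.
    type User struct {
        ID       int64
        Username string
        FullName string
        Email    string
        Bio      string
    }

    // Slim projection of the same users table, for lists and associations.
    type UserRef struct {
        ID       int64
        Username string
        FullName string
    }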


There are two reasons a problem stays unsolved. Perhaps three if you believe Jim Highsmith (we are managing paradoxes). One, it's unsolvable, like the Halting Problem. Two, it's an XY problem and nobody is asking the right question.

I’d been thinking recently that you could build a stupid simple ORM if you set up your SQL results to have the exact same names as your target object/struct and fed the type in as a parameter. Push these results into this box.
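
In Go, something in that spirit is only a few lines of reflection - a sketch that assumes result column names match struct field names (case-insensitively) and skips type-conversion edge cases; needs database/sql, reflect and strings:

    // scanInto fills dest (a pointer to a struct) from the current row,
    // matching result columns to struct fields by name.
    func scanInto(rows *sql.Rows, dest interface{}) error {
        cols, err := rows.Columns()
        if err != nil {
            return err
        }
        v := reflect.ValueOf(dest).Elem()
        targets := make([]interface{}, len(cols))
        for i, col := range cols {
            f := v.FieldByNameFunc(func(name string) bool {
                return strings.EqualFold(name, col)
            })
            if f.IsValid() {
                targets[i] = f.Addr().Interface()
            } else {
                var discard interface{}
                targets[i] = &discard // column with no matching field
            }
        }
        return rows.Scan(targets...)
    }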

But that’s not the functionality that sells ORMs. It’s the object graph. Mapping relational data to graphs, ORM or not, is quite messy. Maybe that says we should be using graph databases for this sort of data instead.


We had many problems using ORMs in production:

- performance issues (DB calls in a loop the worst offender)

- various obscure bugs due to the concept of "flushing" (for example, we just deleted an entity, but it's still retrievable from its repository in the same transaction, which breaks logic)

- additional cognitive load, because you have to learn the ORM's concepts and how they map to both the DB and your domain model

- you have to either add ORM-specific attributes directly to domain objects (which is leakage of infrastructure details to the domain layer) or write tons of functions which convert ORM models to your domain entities and back

So in new code we now use raw SQL and we don't find it all that hard - it's easy to maintain and doesn't take a lot of time. It's problematic with complex aggregates, but our models tend to be anemic.

There's the hypothetical "what if you'll want to change the DB" but all DB calls are encapsulated in the infrastructure layer, so we'd only need to rewrite repository implementations. And you can migrate to a new DB gradually, microservice by microservice, and a typical microservice won't have more than ~20 repositories, so it's pretty manageable.


The actual story here is having the hubris to think you can build a good ORM (in Golang no less, lol) with a handful of engineers at a small company that's not even a technology company. How did this guy become a CTO again? Complete failure of leadership.

Mature ORMs (that still often suck) are built by literally thousands of open-source contributors.


It’s a good article. At Namely we use Dapper which is a “micro-ORM” developed by the folks at StackOverflow for .NET. I really like it because it gives us the mapping part without the sql code generation part which I think is the thing that ultimately gets you in trouble, and it works really well with Postgres and SQL Server.


Yes. The key is designing to the tool's strengths. Build a row-oriented database model in the application language that matches the DB schema. This is trivial for the ORM (like Dapper or ActiveRecord) and you save boilerplate. For simple apps this might be enough. For richer domain models it is often simpler to map from the database model to the domain model in application code than to fight the ORM into doing it just right from DB to domain model directly. Another alternative is to design to the tools and shape your domain model to reduce the friction from the ORM. These trade-offs are specific to each application.
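
A minimal sketch of that layering in Go (the article's context; all names here are invented): the row struct mirrors the table and is trivial to fill, and the domain type is built from it in plain code.

    // Row-shaped model that mirrors the table one-to-one.
    type orderRow struct {
        ID         int64
        CustomerID int64
        TotalCents int64
        Currency   string
    }

    // Domain model, shaped for the application rather than the schema.
    type Money struct {
        Cents    int64
        Currency string
    }

    type Order struct {
        ID       int64
        Customer int64
        Total    Money
    }

    func toDomain(r orderRow) Order {
        return Order{
            ID:       r.ID,
            Customer: r.CustomerID,
            Total:    Money{Cents: r.TotalCents, Currency: r.Currency},
        }
    }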


Personally, I think that both proponents and opponents of ORMs have a number of good points; I also rather enjoyed the idea of using ORMs with generated code (e.g. schema first, a bit of model driven development) for the simpler things and then dropping down to SQL for non-trivial queries. Essentially, knowing both SQL and what your ORM provides and using each for the use cases where it excels.

That said, why does almost no one ever mention ORMs that allow you to generate SQL code dynamically and then map it to objects? The best and perhaps only example that I can think of is myBatis XML mapping, which to me felt like a really powerful, versatile and actually easy to understand approach: https://www.tutorialspoint.com/mybatis/mybatis_mapper_xml.ht...

That way you get the power of SQL, the ability to customize your queries at runtime as needed (e.g. filtering criteria or ordering, or pagination, or even selecting column values from different tables if need be) while also getting good performance (since it's just SQL queries which you can optimize as needed) with a pretty decent mapping layer.

The biggest drawback that I can think of is not being able to work with your DB directly if you don't have query logging on and can't get the full query out of the app when you need it for debugging. Of course, enabling that is pretty easy; however, I hate how most ORMs out there don't let you log SQL queries with the parameter values filled in (not just a placeholder character where the value is supposed to go) in case you ever want to copy a query into your SQL shell/tool of choice and run it. Of course, that's more of an inconvenience than a deal breaker.

Have a look at some of the example dynamic SQL here: https://www.tutorialspoint.com/mybatis/mybatis_dynamic_sql.h...

Why don't more stacks strive for something like that?

(sidenote: they also have docs on GitHub, but GitHub is down at the moment, what a day)


I avoided this Vietnam completely. It was clear to me from the beginning that the data model optimal for relational storage and the one optimal for internal application state (I usually write stateful servers in C++) are two different things.


I did the exact opposite move - moving away from sqlc to gorm. Sqlc is great and fast and it was a joy to use, but I needed the ability to track changes to database models. So either I would build my own abstraction on top of sqlc or work with database triggers. Gorm provided an easy way to hook into all of that as well as making several update queries easier. I still think that sqlc should be the first choice when you don’t need more complex features, but gorm does provide a lot of goodies.
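
For anyone curious, the hooks are roughly this shape (the model and audit table are hypothetical, and I'm going from memory on the API):

    type Document struct {
        ID    uint
        Title string
        Body  string
    }

    // AfterUpdate runs in the same transaction as the update, which makes it
    // a convenient place to record a revision for change tracking.
    func (d *Document) AfterUpdate(tx *gorm.DB) error {
        return tx.Exec(
            "INSERT INTO document_revisions (document_id, changed_at) VALUES (?, ?)",
            d.ID, time.Now(),
        ).Error
    }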


Meanwhile I am "trapped at a Fortune 500 enterprise" with a half-baked project that I think would address all this, yet too afraid to drop that job due to "stability" (which COVID came to make laughable), and also too afraid to make it publicly available because I think it's not yet ready even to start receiving feedback (even though I have been able to start using it for simple cases), and dreaming of the day I will be able to spend all my working hours on it...


SQLAlchemy's core is a great option to use for building dynamic queries. I wonder if we could use python and go to interface with each other for building queries.

go -> python-sqlalchemy-build-query -> run query from go and get result.

https://docs.sqlalchemy.org/en/14/core/


Shameless plug: I'm working with a small team on a data framework that replaces the ORM with an abstraction that eliminates the impedance mismatch. You can take a look at https://github.com/zeidon/zeidon-joe . I'd be happy to answer questions.


I use this pattern in Django, and I feel that without an ORM or query builder this would need 4 raw queries.

    results = Foo.objects()...
    if a:
        results = results.filter(a=a)
    if b:
        results = results.filter(b=b)


    select * from foo 
    where case 
           when :filter_on_a and not :filter_on_b then a=:a 
           when :filter_on_b and not :filter_on_a then b=:b
           when :filter_on_a and :filter_on_b then a=:a and b=:b
           else true
          end
Should preserve all your semantics. Named-parameter support depends on the driver (sqlite3, for instance, accepts a dict for :name placeholders), so let's assume the colon-prefixed parameters are in the right syntax for yours.

    cur.execute("above query", {"filter_on_a": bool(a), "filter_on_b": bool(b), "a": a, "b": b})

Whether the more complex SQL is worth it really depends on your use case.
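
The other route, in Go (the article's context), is to keep the SQL simple and build the clause conditionally in application code - a sketch with Postgres-style placeholders and an assumed foo(a, b) table:

    func queryFoo(db *sql.DB, a, b *string) (*sql.Rows, error) {
        q := "SELECT * FROM foo WHERE 1=1"
        var args []interface{}
        if a != nil {
            args = append(args, *a)
            q += fmt.Sprintf(" AND a = $%d", len(args))
        }
        if b != nil {
            args = append(args, *b)
            q += fmt.Sprintf(" AND b = $%d", len(args))
        }
        return db.Query(q, args...)
    }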


Took me a while to realize that ORMs are fine as long as you don't use them for the "R".


how are they doing migrations?


I tried to use ORMs in Java years ago, but realized that while they make simple things simpler, sooner or later you will need to do something that the ORM can't do simply. Or you will bang your head against the fact that you are trying to marry object graphs to relational models, and you'll burn a lot of CPU, IO and memory in the translation. I have to admit I've been against ORMs ever since. You end up having a weak grasp on what is happening under the bonnet.

You can make much better use of your database, and understand what is going on a lot better, if you let the database be the database.

My approach is to forget about the database, and focus on what the application needs from the persistence layer. Then design an interface for the persistence. Then I implement that interface for each kind of persistence backend I need (SQL, files, memory, etc) while maintaining a single set of unit tests and benchmarks that are written against the interface. This isn't new. A lot of applications and libraries do this. And I think it is a good way to design things.

Usually the database I use to develop a SQL schema is Sqlite3, since it allows for really nice testing. Then I add PostgreSQL support (which requires more involved testing setup, but I have a library that makes this somewhat easier: https://github.com/borud/drydock). (SQLite being in C is a bit of a problem since it means I can't get a purely statically linked binary on all platforms - at least I haven't found a way to do that except on Linux. So if anyone has some opinions on alternatives in pure Go, I'm all ears)

In the Java days, with JDBC, every single method implementing some operation meant a lot of boilerplate. JDBC wasn't a very good API. But in Go that is much less of a problem, in part because you have struct tags and libraries like Sqlx. To that I also add some helper functions to deal with result/error combos. It turns out the majority of my interactions with SQL databases can be carried out in 1-3 lines of code - with a surprising number of cases just being a one-liner. (The performance hit from using Sqlx is in most cases so minimal it doesn't matter. If it matters to you: use Sqlx when modeling and evolving the persistence, and then optimize it out if you must. I think I've done that just once in about 100kLOC worth of code written over the last few years.)
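
For illustration, a typical repository method of mine ends up looking roughly like this (table, struct and field names made up for the example):

    type Device struct {
        ID       int64     `db:"id"`
        Name     string    `db:"name"`
        LastSeen time.Time `db:"last_seen"`
    }

    type Store struct {
        db *sqlx.DB
    }

    func (s *Store) DeviceByID(id int64) (Device, error) {
        var d Device
        err := s.db.Get(&d, "SELECT id, name, last_seen FROM devices WHERE id = $1", id)
        return d, err
    }

    func (s *Store) Devices() ([]Device, error) {
        var ds []Device
        err := s.db.Select(&ds, "SELECT id, name, last_seen FROM devices ORDER BY id")
        return ds, err
    }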

And best of all: I get to deal with the database as a database. I write SQL DDL statements to define the schema, and SQL to perform the transactions. I don't have to pretend it is a object model, so I can make full use of the SQL. (Well, actually, I try to make do as far as possible with trivial SQL, but that's a whole different discussion). The interface type takes care of exposing the persistence in a way that fits the application.

(Another thing I've started experimenting with a bit is to return channels or objects containing channels instead of arrays of things. But there is still some experimenting that needs to be done to find a pleasing design)


In Java there's another popular alternative to ORM/Hibernate, JOOQ (https://www.jooq.org/). Which is basically type-safe SQL.

The idea is to:

* start from an existing relational schema,

* run a CLI tool to generate Java code that represents all the tables/fields,

* write queries in the application code that look like ".select().from(AUTHOR).join(BOOK).on(BOOK.AUTHOR_ID.eq(AUTHOR.ID)).fetch();"

So you don't need to mess around with strings anymore or learn a new DSL. Your queries look exactly like regular SQL and so are very readable. This approach aligns well with schema migrations represented as SQL scripts too. But not everyone is comfortable with code generation.


SQL is actually pretty okay for writing SQL. I was never in love with this rather roundabout way of doing things since it didn’t solve problems I was actually having.


This article would have been better without the unnecessary war analogy and history lesson.


Indeed the analogy has not aged well. It's in bad taste, and somewhat offensive, to hear the term "Vietnam of programming".


Yeah, as a Vietnamese, this title definitely feels offensive to me. Maybe this article is written with an American audience in mind. In Vietnam we call the war the American War, btw. Depending on where you are from, the title could have used the term "USA of programming" :)


Why did it not age well? Are there any new findings regarding Vietnam?


To most people in the world, "Vietnam" is not a war, and is not merely a symbol of absurdly misconceived incompetence.


The funny thing is that it references Vietnam, while the Afghanistan exit happened just a few months ago.


The first time as tragedy, the second as farce...

As expected the linked site [0] gets the history wrong, [EDIT:] blaming JFK for getting USA into a conflict that Eisenhower had stoked over both his terms. (McGovern and Buford were shot down in early 1954.) One of the great things about assassinating your own commander-in-chief is you get to blame all your own screw-ups on him. For a while at least...

[0] http://blogs.tedneward.com/post/the-vietnam-of-computer-scie...


I thought we'd all agreed 15 yrs ago not to use the Vietnam analogy to discuss ORMs.


We have a new, shiny Afghanistan analogy to use!


The article this title refers to [1] is very insightful, and I don't think it was ever improved by something better (analogy-wise, or more descriptive).

[1] http://blogs.tedneward.com/post/the-vietnam-of-computer-scie...


That's the article I was referring to.

I remember a lot of consternation about that analogy.

Followup:

http://blogs.tedneward.com/post/thoughts-on-vietnam-commenta...


Well, in my mind Ted Neward's thoughts in that followup explain why it's still an illustrative analogy.


I am uncomfortable with the analogy. I understand it, but there are other analogies we could use (some are worse. People used to use the term "tar baby" to describe this kind of situation, and I find that term rather disturbing, as well).


The saddest part of the analogy is how US-centric it is. "Vietnam" is a boondoggle to exit from, not a home that's suffering through a bloody conflict.


I think you agreed with yourself while thinking aloud in empty room 15 yrs ago.


What sort of ghoul compares a programming issue with something like the conflict in Vietnam? When he started writing about 15 million people dying, you would think that would have made him stop and rethink the metaphor.


The last reference to Vietnam that I saw in the MSM was that it had done a good job of handling COVID-19. Maybe the article is describing a good use of an ORM.


I think that ship eventually left harbor. :( Scanning their charts, July and August spiked heavily.



