I accidentally removed the WHERE clause from my SQL query in a personal tool. Every row is now the same. I overwrote 206,000+ rows. I have no backup, I am stupid.

drekly@lemmy.world to Programming@programming.dev – 356 points –

"UPDATE table_name SET w = $1, x = $2, z = $4 WHERE y = $3 RETURNING *",

does not do the same as

"UPDATE table_name SET w = $1, x = $2, y = $3, z = $4 RETURNING *",

It's 2 am, my mind blanked out the WHERE, and I just wanted the placeholder numbers neatly in order: 1, 2, 3, 4.

idiot.

FML.


This is a hard lesson to learn. From now on, my guess is you will have dozens of backups.

And a development environment. And not touch production without running the exact code at least once and being well slept.

Fuck that, get shit housed and still do it right. That's a pro.

That's not pro, that's just reckless gambling.

Totally right! You must set yourself up so a fool can run in prod and produce the expected result. Which is the purpose of a test env.

Replied hastily, but the way to run DB statements in prod while sleep-deprived or having drunk too much is to run them a bunch of times across several test-env scenarios, so that you're just copy-pasting into prod and it CAN confidently be done. Also use transactions and determine several valid smoke tests.

Edit: a -> several

And always use a transaction so you're required to commit to make it permanent. See an unexpected result? Rollback.

Transactions aren't backups. You can just as easily commit before fully realizing it. Backups, backups, backups.

Yes, but

  1. Begin transaction
  2. Update table set x='oopsie'
  3. Sees 42096 rows affected
  4. Rollback

Can prevent needing a restore at all, whereas doing the update with auto-commit guarantees needing one on (mostly) every error you make
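
In psql, for example, that whole flow is just a few lines (same made-up table and value as the list above):

BEGIN;
UPDATE table_name SET x = 'oopsie';
-- psql prints the affected row count, e.g. "UPDATE 42096" -- far more rows than intended
ROLLBACK;
-- if the count had looked right: COMMIT;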

Can prevent needing a restore at all, whereas doing the update with auto-commit guarantees needing one on (mostly) every error you make

Exactly. Restores often result in system downtime and may take hours and involve lots of people. The backup might not have the latest data either, and restoring just the single table you screwed up may not be feasible, or may come with the risk of inconsistent data being loaded. Even if you just created the backup before your statement, what about the transactions coming in while you're working and after you realize your error? Can you restore without impacting those?

You want to avoid all of that if possible. If you're mucking with data that you'll have to restore if you mess up, production or not, you should be working with an open transaction. As you said... if you see an unexpected number of rows updated, easy to rollback. And you can run queries after you've modified the data to confirm your table contains data as you expect now. Something surprising... rollback and re-think what you're doing. Better to never touch a backup and not shoot yourself in the foot and your data in the face all due to a stupid, easily preventable mistake.

Backups are for emergencies.

Transactions are for oopsies.

I've read something like "there are two kinds of people: those who backup and those who are about to"

This doesn't help you, but it may help others. I always run my updates and deletes as selects first, validate that the results are what I want, including their number, and then change the select to the delete, update, whatever
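
Something like this, with a made-up table and reusing the exact same WHERE clause both times:

-- dry run: see exactly which rows would be touched, and how many
SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';
-- happy with the result set? Swap the verb and keep the WHERE
UPDATE orders SET status = 'cancelled' WHERE customer_id = 42 AND status = 'pending';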

I learned this one very early on in my career as a physical security engineer working with access control databases. You only do it to one customer ever. 🤷‍♂️

Pro tip: transactions are your friend

Completely agree, transactions are amazing for this kind of thing. In a previous team we also had a policy of always pairing if you need to do any db surgery in prod so you have a second pair of eyes + rubber duck to explain what you're doing.

Postgres has a useful extension, pg_safeupdate

https://github.com/eradman/pg-safeupdate

It helps reduce these possibilities by requiring a where clause for updates or deletes.
I guess if you get into a habit of adding where 1=1 to the end of your SQL, it kind of defeats the purpose.
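
If I remember the README right, once the extension is installed you can load it per session, and an UPDATE or DELETE without a WHERE just errors out (table name made up):

LOAD 'safeupdate';
UPDATE table_name SET z = 4;              -- rejected: no WHERE clause
UPDATE table_name SET z = 4 WHERE y = 3;  -- fine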

MySQL (and by extension, MariaDB) has an even better option:

mysql --i-am-a-dummy
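
As far as I know --i-am-a-dummy is just an alias for --safe-updates, which flips a session variable, so you can also turn it on mid-session (error text from memory, may not be exact):

SET SESSION sql_safe_updates = 1;
UPDATE table_name SET z = 4;   -- ERROR 1175: safe update mode, no WHERE on a key column
DELETE FROM table_name;        -- rejected for the same reason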

Amazing! These are going in my.cnf ASAP.

All databases (well, it doesn't seem like MSSQL supports it, and I thought that's a pretty basic feature) have special configuration that warns or throws an error when you try to UPDATE or DELETE without a WHERE. Use it.

I tried to find this setting for postgres and Ms SQLserver, the two databases I interact with. I wasn't able to find any settings to that effect, do you happen to know them?

for postgres and Ms SQLserver

It's not really a SQL Language feature, more an IDE feature. So to tell you where the settings are, we'd have to know which IDE you're using.

For example, in DataGrip (which I think you can use both for postgres and MSSQL), there's "Show warning before running potentially unsafe queries"

If you forgot to put the WHERE clause in DELETE and UPDATE statements, DataGrip displays a notification to remind you about that. If you omitted the WHERE clause intentionally, you can execute current statements as you planned.

That would be SQL management studio and psql on the command line.

The best I could find was some plugins for SQL Management Studio (SSMSBoost) and disabling automatic commits for psql.

I didn't mean this as an IDE thing; there is an extension for postgres and a server configuration option for mysql/mariadb. Posted the links above

--i-am-a-dummy 😂

I didn't mean this as an IDE thing

Well, the link you've posted is specifically for the MySQL CLI client - maybe I should have said "client" instead of "IDE" - but if he uses a different IDE/client besides the MySQL CLI it's probably a different setting

You're not the first. You won't be the last. I'm just glad my DB of choice uses transactions by default, so I can see "rows updated: 3,258,123" and back the fuck out of it.

I genuinely believe that UPDATE and DELETE without a WHERE clause should be considered a syntax error. If you want to do all rows for some reason, it should have been something like UPDATE table SET field=value ALL.

Because I'm relatively new at this type of thing, how does that appear on the front end? I'm using a JS/HTML front end and a Node.js backend. Would I just see a popup before I make any changes?

No idea. My tools connect directly to the DB server, rather than going through any web server shenanigans.

If you're asking about the number of affected rows, Oracle DB clients do that. For nodejs, Oracle's library will provide this number in the response to a DML statement execution, so you can retrieve it in your backend code. You have to write additional code to bring this message to the front-end.

https://oracle.github.io/node-oracledb/

Awesome, thanks for the info. Definitely super useful for debug mode whilst I'm fixing and tampering!

this folks, is why you don't raw dog sql like some caveman

Me only know caveman. Not have big brain only smooth brain

Yep. If you're in a situation where you have to write SQL on the fly in prod, you have already failed.

Me doing it for multiple years in a Bank....Uhm...

(let's just say I am not putting my money near them... and not just because of that but other things...)

Tell that to my former employer...

Yeah, I swear it's part of the culture at some places. At my first full-time job, my boss dropped the production database the week before I started. They lost at least a day of records because of it and he spent most of the first day telling me why writing sql in prod was bad.

But the adrenaline man... some of us are adrenaline junkies but we are too afraid of anything more physically dangerous...

You may be interested in Suicide Linux then. It's a distro that wipes your entire hard drive if you mistype a command

Raw dog is the fastest way to finish a task.

  • productivity
  • risk

It's a trade-off

There's no way you're endorsing the way OP handled their data right?

No, but people are sometimes forced to do these things because of pressure from management and/or lack of infrastructure to do it in any other way.

Definitely don't endorse it but I have done it. Think of an "everything is down" situation that can be fixed in 1 minute with SQL.

Always SELECT first. No exceptions.

Better yet... Always use a transaction when trying new SQL/doing manual steps and have backups.

mind explaining?

By running a select query first, you get a nice list of the rows you are going to change. If the list is the entire set, you'll likely notice.

If it looks good, you run the update query using the same where clause.

But that's for manual changes. OP's update statement looks like it might be generated from code, in which case this wouldn't have helped.

I did when I made the query a year ago. Dumdum sleep deprived brain thought it would look more organised this way

I once dropped a table in a production database.

I never should have had write permissions on that database. You can bet they changed that when clinicians had to redo four days of work because the hosting company or whatever only had weekly backups, not daily.

So, I feel your pain.


There is still the journal you could use to recover the old state of your database. I assume you committed after your update query, so you would need to first copy the journal, remove the updates from it, and reconstruct the db from the altered journal.

This might be harder than what I'm saying and heavily depends on which db you used, but if it was a transactional one it has to have a journal (not sure about nosql ones).

It's only after the event that I find out that postgres' WAL journalling is off by default 🙃

You all run queries against production from your local? Insanity.

The distinctions get blurry if you're the sole user.

My only education is a super helpful guy from Reddit who taught me the basics of setting up a back end with nodejs and postgres. After that it's just been me, the references and stack overflow.

I have NO education about actual practices and protocol. This was just a tool I made to make my work easier and faster, which I check in and update every few months to make it better.

I just open vscode, run node server.js to get started, and within server.js is a direct link to my database using the SQL above. It works, has worked for a year or two, and I don't know any other way I should be working. Happy to learn though!

(but of course this has set me back so much it would have been quicker not to make the tool at all)

With that amount of instruction you've done well

There's probably lots of stuff you don't even know you don't know.

Automated testing is a big part of professional software development, for example, and helps you catch things like this issue before they go live.

I'm up to 537 lines of server code, 2278 lines in my script, and 226 in my API interfacing, I'm actually super proud of it haha.

But you're totally right, there are things I read that I just have no clue what they even mean or if I should know it, and probably use all the wrong terminology. I feel like I should probably go back to the start and find a course to teach me properly. I've probably learned so many bad habits. It doesn't help that I learned JS before ES6 so I need to force myself not to use var and force myself to understand and use arrow functions.

I absolutely know that the way I've written the program will make some people cringe, but I don't know any better. There are a few sections where I'm like "would that actually be what a real, commercial web app would do, or have I convoluted everything?"

For example, the entire thing is just one 129-line html file. I just hide and unhide divs when I need a new page or anything gets changed. I'm assuming that's a bad thing, but it works, it looks good, and I don't know any better!

Have a look at an ORM, if you are indeed executing plain SQL like I'm assuming from your comment. Sequelize might be nice to start with. What it does is create a layer between your application and your database, through which you can define the way a database object looks (like a class) and execute functions on that. For instance, if you're creating a library, you could do book.update(), library.addBook(), etc. Since it adds a layer in between, it also helps you prevent common vulnerabilities such as SQL injection, because you aren't writing the SQL queries in the first place. If you want to know more, let me know.

Thanks, I'll look into it! I'm interested in why you got downvoted though! 😅

I didn't downvote, but some people just ideologically dislike ORMs. The reasons I've heard are usually "I can write better SQL by hand", "I don't want to use/learn another library", "it has some limitations"

Those things can be true. Writing better SQL by hand definitely is a big "it depends", though.

I can see why people might dislike them. Adds some bloat perhaps. But at the same time, I like the idea that my input is definitely sanitised since the ORM was written by people who know what they're doing. That's not to say it won't have any vulnerabilities at all, but the chance of them existing is a lot lower than when I write the queries by hand. A lapse of judgement is all it takes. Even more relevant for beginning developers who might not be aware of such vulnerabilities existing.

For a personal tool that runs locally I can handle some bloat in the name of safety!

Mostly just safety from yourself/your own little errors in input, but it can't hurt for sure! Input sanitation is mostly relevant to fend off script kiddies. Relevant xkcd

Short story, haters gonna hate ¯\_(ツ)_/¯ Long story, see my comment to the commenter below you. :)

Everyone has a production system. Some may even have a separate testing environment!


Periodic, versioned backups are the ultimate defense against bugs.

Periodic, versioned and tested backups.

It absolutely, totally, never ever happened to me that I had a bunch of backups available that turned out to be effectively unrestorable the moment I needed them. 😭

The only feeling worse than realizing you don't have a backup is realizing your backup archives are useless.

Or like that time GitLab found out that none of its 5 backup/replication mechanisms worked and lost 6 hours of data.

Who thought it was a good idea to make the WHERE condition in SQL syntax only come after the SET?? Disaster waiting to happen

The people designing SQL, not having learned from the mistakes of COBOL, thought that having the syntax as close to English as possible will make it more readable.

There was a time I worked as a third party for one of the 10 most accessed websites in my country. I got assigned to a project that had been maintained by another third party for 10+ years with no oversight. I have many stories from there, but today's is that this company had very strict access control to the production database.

Third parties couldn't access the database directly. All fine and good, except for the fact that we were expected to setup some stuff in the database pretty much every week. The dude who kept this project running for the previous decade, unable to get proper credentials to do his job, instead created an input box in some system that allowed him to run any sql code.

You can already guess the rest of the story, right? For security reasons we had to do things in the least secure way imaginable. Eventually, wheres were forgotten.

I watched someone make this mistake during a screen share. She hit execute and I screamed "Wait! You forgot the WHERE!" Fortunately, it was such a huge database that SQL spun for a moment (I guess deciding how it was going to do it before actually doing it), so she was able to cancel it and run a couple of checks to confirm it hadn't actually changed anything yet. I don't think anything computer-related has ever gotten my adrenaline going like that before or since

I did that once when I moved from one DB IDE to another and didn't realise the new one only ran the highlighted part of the query.

there were thousands of medical students going through a long process to find placements with doctors and we had a database and custom state machine to move them through the stages of application and approval.

a bug meant a student had been moved to the wrong state. so I used a snippet of SQL to reset that one student, and as a nervous habit highlighted parts of the query as I reread them to be sure it was correct.

then hit run with the first half highlighted, without the where clause, so everyone in the entire database got moved to the wrong fucking state.

we had 24 hourly backups but I did it late in the evening, and because it was a couple of days before the hard deadline for the students to get their placements done hundreds of students had been updating information that day.

I spent until 4am the next day working out ways to infer what state everyone was in from which other fields had been updated to what, and incidentally found the original bug in the process 😒

anyway, I hope you feel better soon buddy. it sucks but it happens, and not just to you. good luck.

In MSSQL, you can do a BEGIN TRAN before your UPDATE statement.

Then if the number of affected rows is not about what you'd expect, doing a ROLLBACK would undo the changes.

If the number of affected rows did look about right, doing a COMMIT would make the changes permanent.
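
Sketched out (made-up table; @@ROWCOUNT has to be read straight after the UPDATE):

BEGIN TRAN;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 1234;
SELECT @@ROWCOUNT AS rows_affected;   -- expecting 1 here, not 206,000
-- ROLLBACK;   -- if that number looks wrong
-- COMMIT;     -- if it looks right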

Yup, exact tip I was gonna write!

I have them commented out and highlight the COMMIT when I'm ready.

Thanks for sharing your painful lesson. I don't directly query DBs a lot, but WHEN I do in the future I'll select first and validate. Such things can happen so fast. No self-hate, man!

I have done this too. Shit happens.

One of my co-workers used to write UPDATE statements backwards (LIMIT, then WHERE, and so on) to prevent this stuff; feels like a bit of a faff to me.

I always write it as a select, before turning it into a delete or update. I have burned myself too often already.

^ this is a very good tip that I've been using myself too

Oh I did that like a year ago.

And then last night had an error that led me back near this code and stupidly thought "hey it'd look neater if those numbers were in order"

I learned the hard way about the beauty of backups and the 3-2-1 rule. And snapshots are the GOAT.

Even large and (supposedly) sophisticated teams can make this mistake, so don't feel bad. It's all part of learning and growth. You have learned the lesson in a very real and visceral way - it will stick with you forever.

Example - a very large customer running our product across multiple servers, talking back to a large central (and shared) DB server. DB server shat itself. They called us up to see if we had any logs that could be used to reconstruct our part of their database server, because it turned out they had no backups. Had to say no.

You could use DBeaver, which warns you about UPDATE and DELETE queries without a WHERE clause, independently of the DB system. I hope the functionality is still there, since, for totally unrelated reasons, I always use a WHERE clause, even when buying groceries.

Depending on the database used, the data might still be there, just really hard to recover (as in, its presence is a side-effect, not the intent).

https://stackoverflow.com/a/12472582 takes a look at Postgres case, for example.

Oof. Been there, done that, 0 stars; would not recommend.

Pressing F to pay respects. R.I.P. in pieces

Depending on how mission critical your data is... Set up delayed replicas and backups (and test that your backups can actually be restored from). Get a second pair of eyeballs on your query. Set up test environments and run it there before running it in production. The more automated testing you put into your pipeline, the better. Every edit should be committed and tested. (Kubernetes and GitLab Auto DevOps make this kind of thing a cinch; every branch gets a new test environment set up automatically.)

Don't beat yourself up too much though. It happens even to seasoned pros.

This is about the one thing where SQL is a badly designed language, and you should use a frontend that forces you to write your queries in the order (table, filter, columns) for consistency.

UPDATE table_name WHERE y = $3 SET w = $1, x = $2, z = $4 RETURNING *
FROM table_name SELECT w, x, y, z

I get mildly mad all the time when writing SQL because I feel like it's upside down

Instead of

select u.id, u.email, p.name
from user u
join persona p on p.user_id = u.id
where u.active = true

where the columns are referenced before they're defined (like what is u.id? Keep reading to find out!)

it should instead be

from user u
join persona p on u.id = p.user_id
where u.active = true
select u.id, u.email, p.name

Now nothing is defined before it's used, and you're less likely to miss your where clause. I usually write the joins first anyway because I know what tables I care about, but don't know which specific things I'll want.

I can't think of any other languages that use things before they're defined except extremely dysfunctional JavaScript.

Things like this make me glad I can only query my db.

If it's Microsoft SQL you should be able to replay the transaction log. But you should be doing something like daily full backups and hourly incremental or differential backups to avoid this situation in the first place.

SQL scout's credo: I will never use indexes, I will always use column names.

Unrelated, but use placeholders instead of interpolation right into the query.

See: Little Bobby Tables. https://xkcd.com/327/

That's what they're doing...

If true, great. I've not run across a language / RDBMS library that uses numbered placeholders over the standard ?, but I'm sure someone's done it.
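
Postgres is one, as far as I know: its prepared statements use numbered placeholders ($1, $2, ...), which is presumably what the OP's node/postgres query is passing through. You can see the same thing in plain SQL (made-up table):

PREPARE set_email (int, text) AS
  UPDATE users SET email = $2 WHERE id = $1;
EXECUTE set_email(42, 'someone@example.com');
DEALLOCATE set_email;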

I know it's too late to be helpful now, but I always write the WHERE first, because you are not the first person to have done this...

I'm using DataGrip (IntelliJ) for any manual SQL tomfoolery. I have been where you are. Luckily for me, the tool asks for additional confirmation when doing any update/delete without a where clause.

Also, backups are a must, for all the right reasons and for any project.

Been there, done that, I hope you have a recent backup!