Tech workers - what did your IT Security team do that made your life hell and had no practical benefit?

Krudler@lemmy.world to Ask Lemmy@lemmy.world – 394 points –

One chestnut from my history in lottery game development:

While our security staff was incredibly tight and did a generally good job, oftentimes levels of paranoia were off the charts.

Once they went around hot-gluing shut all of the "unnecessary" USB ports on our PCs under the premise of mitigating data theft via thumb drive, while ignoring that we were all Internet-connected, that VPNs are a thing, and that every machine had a RW optical drive.


Oftentimes you'll find that the crazy things IT does are forced on them by higher-ups who don't know shit.

A common case of this is requiring password changes every x days, which is a practice that is known to actively make passwords worse.

Or it prompts people to just stick their "super secure password" with byzantine special character, numeral, and capital letter requirements to their monitor or under their keyboard, because they can't be arsed to remember what nonsensical piece of shit they had to come up with this month just to make the damn machine happy and allow them to do their jobs.

I do this in protest of asinine password change rules.

Nobody's gonna see it since my monitor is at home, but it's the principle of the thing.

A dedicated enough attacker can and will look in your window! Or do fancier things, like enabling cameras on devices you put near your monitor.

Not saying it's likely, but writing passwords down is super unsafe

What you are describing is the equivalent of somebody breaking into your house so they can steal your house key.

No, they're breaking into your house to steal your work key. The LastPass breach was accomplished by hitting an employee's personal, out-of-date Plex server and then using it to compromise their work-from-home computer. Targeting a highly privileged employee's personal technology is absolutely something threat actors do.

The point is if they're going to get access to your PC it's not going to be to turn on a webcam to see a sticky note on your monitor bezel. They're gonna do other nefarious shit or keylog, etc.

Why keylog and pick up 10k random characters to sift through when the password they want is written down for them?

Again, how is the attacker going to see a piece of paper that is stuck to the side of the screen? This rule makes sense in high-traffic areas, but in a private person's home? The attacker would also need to be a burglar.

It seems that some people are having trouble following the conversation and a basic stream of topical logic.

The initial premise was that somebody could see your passwords by pwning your machine... And using that to... Turn on webcam so they could steal your password so they could... pwn your machine?

Lol

Nope. The premise is they pwn ANOTHER, less secure, personal device and use the camera from that DIFFERENT device to pwn your work computer. For example, by silently installing Pegasus on some cocky "security is dumb lol" employee's 5-year-out-of-date iPhone via text message while they're sleeping, and using the camera from that phone to recon the password.

They probably wouldn't want the $3.50 that person has in their bank account, but ransoming corporate data pays bank, and wire transferring from a corporate account pays even better! If you're in a highly privileged position, or have access to execute financial transactions at a larger company, pwning a personal device isn't outside of the threat model.

Most likely that threat model doesn't apply to you, but perhaps at least put it under the keyboard out of plain sight?


Your coworkers put their password under the keyboard? Mine just leave a post-it on the side of the monitor.


The DOD was like this. And it wasn't just that you had to change passwords every so often; the requirements for those passwords were egregious, but at the same time changing one number or letter was enough to satisfy them.

I'm in IT security and I'm fighting this battle. I want to lessen the burden of passwords and arbitrary rotation is terrible.

I've run into a number of issues at my company in trying to get approval to reduce the frequency of password expiration:

  • the company gets asked by other customers, "do you have a password policy for your staff?" (one that somehow must include an expiration frequency).

  • on-prem AD password complexity has some nice parts built in vs. some terrible parts with no granularity. It's a single checkbox in GPO that does way too much. I'm also not going to write a custom password policy, because I don't have the skill set to do it correctly when we're talking about AD; that's nightmare-inducing. (Looking at Specops to help, and already using Azure AD Password Protection in passive mode.)

  • I think management is worried that a phishing event happens on a person with a static password, and then unfairly conflates that with my argument of "can we just do two things: increase password length by 2 and decrease expiration frequency by 30 days"

At the end of the day, some of us in IT security want to do the right things based on common sense, but we get stymied by management decisions and precedent. Hell, I've brought NIST 800-63B documentation with me to counter every reason why they wouldn't budge. It's just ingrained in them - meanwhile, look at the number of tickets for password help and password-sharing violations that get reported ... /Sigh

> At the end of the day, some of us in IT security want to do the right things based in common sense but we get stymied by management decisions and precedence. Hell, I've brought NIST 800-63B documentation with me to check every reason why they wouldn't budge. It's just ingrained in them - meanwhile you look at the number of tickets for password help and password sharing violations that get reported …

Paint the picture for management:

At one time surgery was the purview of medieval barbers. Yes, the same barbers that cut your hair. At the time there were procedures to intentionally cause people to bleed excessively, and to cut holes in the body to let one of the "4 humors" out to make the patient well again. All of this humanity arrived at after tens of thousands of years of existence on Earth. Today we look at this as uninformed and barbaric. Yet we're doing the IT Security equivalent of those medieval barbers still today. We're bleeding our users unnecessarily with complex, frequent password rotation and other bad methods because that's what was the standard at one time. What's the modern-medicine version of IT Security? NIST 800-63B is a good start. I'm happy to explain what's in there. Now, do we want to keep harming our users and wasting the company's money on poor efficiency, or do we want to embrace the lessons learned from that bad past?

I feel this. I increased complexity and length, and reduced change frequency to 120d. It worked really well with the staggered rollout. Shared passwords went down significantly, password tickets went to almost none (there's always that 'one'). Everything pointed to this being the right thing, and the fact that NIST supports it was a win... until the IT audit. The auditor wrote "the password policy changed from 8-length, moderate complexity, 90-day change frequency to 12-length, high complexity, 120-day change frequency" and the board went apeshit. It wasn't an infraction or a "ding"; it was only a note. The written policy was, of course, changed to match the GPO, so the note was just for the next auditor to know of the change. The auditor even mentioned how impressed he was with the modernity of our policy and how it should lead to a better posture. I was forced to change it back, even though I'd gotten buy-in from the CTO for the change. BS.

Having been exposed to those kinds of audits before that's really just bad handling by the CTO and other higher ups!

That's super true. So many times, to stay ISO compliant (I'm thinking about the lottery industry here), security policies need to align with other recommendations and best practices that are often insane.

But then there's a difference between those things, where we can at least rationalize WHY they exist... and then there's gluing USB plugs shut because they read about it on Slashdot and got paranoid. Lol

What I really love is mandatory length and character password policies so complex that, combined with frequent change requirements, they push people beyond what is humanly possible to memorize, so it all ends up written on post-its, the IT equivalent of having a spare key under a vase or the rug.

For our org, we are required to do this for our cybersecurity insurance plan

Tell them NIST now recommends against it so the insurance company is increasing your risks

The guideline is abundantly clear too with little room for interpretation:

5.1.1.1 Memorized Secret Authenticators

Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets. Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.

https://pages.nist.gov/800-63-3/sp800-63b.html
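For the curious, that guideline reduces to a very small check at password-set time. A minimal Python sketch of an 800-63B-style verifier policy (the function name and the example breached set are illustrative, not from NIST's text):

```python
def acceptable_memorized_secret(candidate: str, breached: set, minimum_length: int = 8) -> bool:
    """Rough 800-63B-style check: enforce a minimum length and screen against
    known-compromised passwords, but impose no composition rules and no
    scheduled expiry."""
    if len(candidate) < minimum_length:
        return False
    # Rotation is forced only on evidence of compromise; screening new
    # passwords against breached lists is the proactive half of that rule.
    if candidate.lower() in breached:
        return False
    return True
```

Note what is absent: no "must contain a digit and a symbol" rules, and no expiry timestamp anywhere.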

Forcing password expiration does cause people to make shittier passwords. But when their passwords are breached programmatically or through social engineering, they don't just sit around valid for years on the dark web waiting for someone to buy them up.

This requirement forces people who can't otherwise remember passwords to fall into patterns like (kid's name)(season)(year); this is a very common password pattern for people who have to change passwords every 90 days or so. Breaching the password would expose the pattern and make it easy enough to guess from.

99% of password theft currently comes from phishing. Most of the people that get phished don't have a freaking clue they got phished: "oh look, the Microsoft site link didn't work."

Complex passwords that never change don't mean s*** when your users are willing to put them into a website.

It's still not in a freaking list that they can run a programmatic attack against. People that give this answer sound like a f****** broken record I swear.

Secops has been against this method of protection for many years now, I’d say you’re the outdated one here

Years ago, phishing and 2FA breaches weren't as pervasive. Since we can't all go to passkeys right now, nobody's doing a damn thing about the phishing campaigns. Secops' current method of protection is to pay companies that scan the dark web, buy the lists, and offer up whether your password's been pwned, for a fee.

That's a pretty s***** tactic to try to protect your users.

We're on the internet, you can say shit.

If your user is just using johnsmithfall2022 as their password and they update the season and year every time, it's pretty easy for hackers to identify that pattern and correct it. This is not the solution and it actively makes life worse for everyone involved.
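That pattern is mechanical enough to enumerate outright. A hypothetical Python sketch (the names and years are whatever an attacker learned from one breached password):

```python
from itertools import product

def season_year_guesses(names, years):
    """Enumerate (name)(season)(year) candidates - the whole 'rotation
    history' of a user who follows this pattern fits in one tiny list."""
    seasons = ["spring", "summer", "fall", "autumn", "winter"]
    return [f"{n}{s}{y}" for n, s, y in product(names, seasons, years)]

guesses = season_year_guesses(["johnsmith"], [2022, 2023])
# Every password this user has "rotated" through is now a 10-entry wordlist.
```

One leaked password plus a calendar beats the 90-day policy completely.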

Password crackers say you're wrong.

No never minded people that think that all passwords are being cracked tell me I'm wrong. Lists emails and passwords grabbed from fishing attacks tell me the people that are too lazy to change their passwords and once in awhile don't deserve the security.

I'm a native English speaker. I can't understand your comment. I sense that you have a useful perspective, could you rephrase it so it's understandable?

NIST now recommends watching for suspicious activity and only forcing rotation when there's risk of compromise.

Tell me, can your users identify suspicious activity cuz mine sure as hell can't.

That's why password leak detection services exist

(And a rare few of them yes)
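Worth noting how those leak-detection lookups usually work: k-anonymity, where only a short prefix of the password's SHA-1 hash ever leaves your machine. A Python sketch of the client-side split, assuming a Pwned-Passwords-style range API (the network call itself is omitted):

```python
import hashlib

def k_anonymity_parts(password: str):
    """Split a password's SHA-1 digest into the 5-char prefix you send to a
    range endpoint and the suffix you match locally against the response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = k_anonymity_parts("password")
# Query the range endpoint with <prefix>, then look for <suffix> in the
# returned list; the service never sees the full hash, let alone the password.
```

The server learns only that your hash is one of the several hundred sharing that prefix.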

And in my company the password change policies are very different from one system to another. Some force a change monthly, some every 28 days, some every 90 days, and then there is that one legacy system that no longer has a functioning password change mechanism, so we couldn't change passwords there if we wanted to.

And the different systems all want different password formats and have different re-use rules.

And, with all those uncoordinated passwords, they don't allow password managers to be used on corporate machines, despite the training materials that the company makes us re-do every year recommending password managers...

So glad we opted for a longer password length, with fewer arbitrary limits, and expiry only after 2 years or a suspected breach.

Even better is forcing changes every 30 or 60 days while not allowing changes more than once a week. Our users complain daily about those rules and the password requirements, which they're too dumb to understand.

Password changes that frequent are shown to be ineffective, especially for the hassle. Complexity is a better protection method.

I'm aware. Apparently everyone who read my post has misread it. I'm saying that the requirements above are terrible, and they make my users complain constantly. Our security team constantly comes up with ways to increase security theater at the detriment of actual security.


Banned open source software because of security concerns. For password management they require LastPass or that we write them down in a book that we keep on ourselves at all times. Worth noting that this policy change was a few months ago. After the giant breach.

And for extra absurdity: MFA via SMS only.

I wish I was making this up.

Banning open source because of security concerns is the opposite of what they should be doing if they care about security. You can't vet proprietary software.

It's not about security, it's about liability. You can't sue OSS to get shareholders off your back.

Care to elaborate on "MFA via SMS only"? I'm not in tech and know MFA through text is widely used. Or do you mean alternatives like Microsoft Authenticator or YubiKey? Thanks!

Through a low-tech social engineering attack referred to as SIM jacking, an attacker can have your number moved to their SIM card, redirecting all SMS 2FA codes and effectively making the whole thing useless as a security measure. Despite this, companies still implement it out of both laziness and a desire to collect phone numbers (which is often why SMS MFA is forced).

To collect numbers, which they sell in bulk, to shady organizations, that might SIM-jack you.

SIM swapping is quite easy if you are convincing enough for the support people at an ISP doing phone plans.
Now imagine if I sim-swapped your 2FA codes :)

Exactly this. Instead you should use a phone app like Aegis or proprietary solutions like MS Authenticator to MFA your access because it's encrypted.
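For anyone wondering what those apps actually do: a TOTP authenticator derives codes offline from a shared secret, so there is no code in transit for a SIM swapper to intercept. A minimal RFC 6238 sketch in stdlib Python (real apps add base32 secret handling, clock-drift windows, etc.):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter, dynamic
    truncation, then the last `digits` decimal digits."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: SHA-1 secret "1234...890", time 59 -> "94287082"
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

Both sides compute the same code independently; nothing goes over the carrier network at all.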

Thanks! I really don't want to be forced into an app, but it's good to know the reason why.

I tried so hard to steer my last company away from SMS MFA. CTO basically flat out said, "As long as I'm here SMS MFA will always be an option."

Alright, smarmy dumbass. I dream of the day when they get breached because of SMS.

If I remember correctly, in GSM it's perfectly possible to spoof a phone number to receive its SMS using the roaming part of the protocol.

The thing was designed to be decently safe, not to be highly secure.

Took away admin rights, so every time you wanted to install something, or do anything in general that required higher privileges, we had to file a ticket with the helpdesk to get 10 minutes of admin rights.

The review of your request sometimes took up to 3 days. Fun times for a software developer.

We worked around this at my old job by getting VirtualBox installed on our PCs and just running CentOS or Ubuntu VMs to develop in. Developing on windows sucks unless you're doing .NET imo.

Developing on VMs also sucks, neverending network issues on platforms like Windows which have a shitty networking stack (try forwarding ports or using VPN connections).

In fact, Windows is just a shitty dev platform in general for non-Microsoft technologies but I get that you needed to go for the least shit option

Yeah fortunately we didn't need to do any port forwarding or anything complex for networking for developing locally. It was definitely much easier for us. I don't like Apple, but I didn't mind my other old job that gave us MacBooks honestly.

Oh shit, you just reminded me of the time that I had to PHONE Macromedia to manually activate software because of the firewalling. This was after waiting days to get administrative permission to install it in the first place.

"Thank you" for helping resurface those horrible memories!

I don't miss those days.

3 days? That's downright speedy!

I submitted a ticket that fell into a black hole. I have long since found an alternate solution, but am now keeping the ticket open for the sick fascination of seeing how long it takes to get a response. 47 days and counting...

Nobody wants to take it because it will mess up their KPIs.

Any ticketing system set up like that is just begging for abuse. If they don't have queue managers then the team should share the hit if they just leave the ticket untouched

During those 10 minutes of admin rights:

```
rem Create a local admin account while the temporary rights last
net user secretlocaladmin * /add
rem Put it in the local Administrators group
net localgroup administrators secretlocaladmin /add
```

There's likely a GPO cycling and removing all the admins.

This was my experience too. Shitty group policies messing with my local changes

We used Intune Portal for a list of approved desktop apps

Let me guess, the list is about 6 items long with no provision for getting any added

No, it was quite extensive (20-30?) and we (I) kept expanding it. I even added icons for each app so it looked nice.

All published software was approved by Cybersecurity. We allowed people to request apps and evaluated each case.

Fighting similar shit right now. I need admin rights frequently.

Over 150 Major Incidents in a single month.

Formerly, I was on the Major Incident Response team for a national insurance company. IT Security has always been in their own ivory tower at every company I've worked for. But this company's IT Security department was about the worst case I've ever seen, up until that time and since.

They refused to file changes, or discuss any type of change control with the rest of IT. I get that Change Management is a bitch for most of IT, but if you want to avoid major outages, file a fucking Change record and follow the approval process. The security directors would get some harebrained idea in a meeting in the morning and assign one of their barely competent techs to implement it that afternoon. They'd bring down whatever system they were fucking with. Then my team had to spend hours, usually after business hours, figuring out why a system, which had not seen a change control in two weeks, suddenly stopped working. Would security send someone to the MI meeting? Of course not. What would happen is, we would call the IT Security response team and ask if anything changed on their end. Suddenly, 20 minutes later, everything was back up and running, with the MI team not doing anything. We would try to talk to security and ask what they changed. They answered "nothing" every god damn time.

They got their asses handed to them when they brought down a billing system which brought in over $10 Billion (yes, with a "B") a year and people could not pay their bills. That outage went straight to the CIO, and even the CEO sat in on that call. All of a sudden there was a hard change freeze for a month, and security was required to file changes in the common IT record system, which was ServiceNow at the time.

We went from 150 major outages (defined as having financial, or reputation impact to the company) in a single month to 4 or 5.

Fuck IT Security. It's a very important part of every IT department, but it is almost always filled with the most narcissistic, incompetent asshats of the entire industry.

Jesus Christ, I never thought I'd be happy to have a change control process.

Lots of safety measures really suck. But they generally get implemented because the alternative is far worse.

At my current company all changes have to happen via GitHub PR and commit because we use GitOps (ex: ArgoCD with Kubernetes). Any changes you do manually are immediately overwritten when ArgoCD notices the config drift.

This makes development more annoying sometimes but I'm so damn glad when I can immediately look at GitHub for an audit trail and source of truth.

It wasn't InfoSec in this case but I had an annoying tech lead that would merge to main without telling people, so anytime something broke I had his GitHub activity bookmarked and could rule that out first.

You can also lock down the repo to require approvals before merge into main branch to avoid this.

Since we were on the platform team we were all GitHub admins 😩. So it all relied on trust. Is there a way to block even admins?

Hm can't say. I'm using bitbucket and it does block admins, though they all have the ability to go into settings and remove the approval requirement. No one does though because then the bad devs would be able to get changes in without reviews.

That sounds like a good idea. I'll take another look at GitHub settings. Thanks!

The past several years I have been working more as a process engineer than a technical one. I've worked in Problem Management, Change Management, and currently in Incident for a major defense contractor (yes, you've heard of it). So I've been on both sides. Documenting an incident is a PITA. File a Change record to restart a server that is in an otherwise healthy cluster? You're kidding, right? What the hell is a "Problem" record and why do I need to mess with it?

All things I've heard and even thought over the years. What it comes down to, the difference between a mom-and-pop operation with limited scalability and a full enterprise environment that can support a multi-billion-dollar business... is documentation. That's what those numbnuts at that insurance company were too stupid to understand.

You poor man. I've worked with those exact fukkin' bozos.

Lack of a Change Control process has nothing to do with IT Security except within the domain of Availability. Part of Security is ensuring IT systems are available and working.

You simply experienced working at an organization with poor enforcement of Change Control policies. That was a mistake of oversight, because with competent oversight anyone causing outages by making unapproved changes that cause an outage would be reprimanded and instructed to follow policy properly.

Set the automatic timeout for admin accounts to 15 minutes... meaning that for a process that may take an hour or so, you have to wiggle the mouse or it logs out. Not locks - logs out.

From installs, to copying log files, to moving data, to reassigning the owner of data to the service account.

And that's why people use mouse jigglers and keep their computers unlocked 24/7.

Mine was removed by Corporate IT, along with a bunch of other open source stuff that made my life bearable.

Also, I spent 5 months with our cybersecurity guys trying to provide a simple file replication server for my team working in a remote office with shit internet connectivity. I gave up; the spooks put up a solid defense, pushing all the onerous IT security compliance checking onto my desk instead of taking control.

Not as bad as my previous company though, outsourced IT support to ATOS was a nightmare.

It's reasonably easy to make a hardware mouse wiggler with an Arduino Micro (and I don't mean something that physically moves a mouse, rather something that looks like a USB mouse to the computer and periodically sends mouse movement messages).

If you're desperate enough, look it up as it's quite simple so there should be step by step instructions out there.

Absolutely love my Uno keyboard for this https://keyhive.xyz/shop/uno-single-key-keyboard

Got like 6 commands on a single key, and one of them presses shift every 30 seconds so my computer doesn't lock. Lifesaver.

Yeah, it's surprisingly simple to get these microcontrollers to become essentially programmable keyboard/mouse emulators, by which point if you're familiar with the stuff to program them (Arduino being the simplest and most widespread framework) it really just becomes a coding task and you can get it to do crazy stuff.

I suggested an Arduino Micro board because it bypasses the whole hardware side of the problem, but something like what you mention is even simpler.

I used a Sidewinder keyboard for years with programmable macros.

Yeah, I had my password as a macro.

Dick move on my part as the macro, I'm fairly sure, is stored in plaintext on the PC. But the convenience was great. I don't do that any more.

Can also just buy one from Amazon if you’re lazy or not technically inclined.

Well, my off the cuff suggestion was what seems simple to me in this domain ;)

That said I get what you mean and agree.

That's why you buy a jiggler that you place your mouse onto. Not detectable by IT :)

I set my pocket knife on the ctrl key when I have to step away.

Ahhh the old "level up an RPG Skill by jamming a pen cap into a key and going to watch Night Court reruns" method.

Thanks, I actually didn't know holding CTRL would keep the system awake!

Does that keep your status in Teams as "online"? That's what I use the jiggler for - if I'm waiting for CI tests which take 30+ minutes and I sit in front of the laptop, I don't want to have to manually jiggle my mouse every couple of minutes just to keep my status.

That works?

Idk about every application but it keeps windows from timing out which serves most purposes for me.

After mine was disabled, I found that if I run videos of old meetings or training onscreen, it keeps the system alive...

Works nicely when I'm WFH.

The internal IT at that hellhole is a nightmare as well.

There is no compliance item I am aware of that has that requirement; some CISO needs to learn to read.

Misunderstood STIG, from the sound of it. The STIG is only applicable to unprivileged users, but tends to get applied to all workstations regardless of user privileges. Also, I think the .mil STIG GPOs apply it to all workstations regardless of privileges.

The other thing that tends to get overlooked is that AC-12 lets you set it to whatever the heck you want. So you could theoretically set it to 99999 years by policy if you wanted.

https://www.stigviewer.com/stig/application_security_and_development/2017-01-09/finding/V-69243

One IT security team insisted we have separate source code repositories for production and development environments.

I’m honestly not sure how they thought that would work.

That's fucking bananas.

In my job, the only difference between prod/dev is a single environmental file. Two repositories would literally serve no purpose and if anything, double the chances of having the source code be stolen.

That was the only difference for us as well. The CI/CD process built container images. Only difference between dev, test, and prod was the environment variables passed to the container.

At first I asked the clueless security analyst to explain how that improves security, which he couldn’t. Then I asked him how testing against one repository and deploying from another wouldn’t invalidate the results of the testing done by the QA team, but he kept insisting we needed it to check some box. I asked about the source of the policy and still no explanation, at least not one that made any sense.

Security analyst escalated it to his (thankfully not clueless) boss who promptly gave our process a pass and pointed out to Mr security analyst that literally nobody does that.

> I'm honestly not sure how they thought that would work.

Just manually copy-paste everything. That never goes wrong, right?

I mean, it's what the Security guys do, right? Just copy+paste everything, mandate that everyone else does it too, Management won't argue because it's for "security" reasons.

Then the Security guys will sit around jerking each other off about how much more secure they made the system

Could work if dev was upstream from prod. But honestly there would be no difference between that and branches.

Maybe it is a rights issue: preventing a prod build agent of sorts from accessing develop code.

Yep, doing that now. Not sustainable in the slightest. I'm glad I'm not in charge of that system.

Not my IT department (I am my IT department): One of the manufacturers for a brand of equipment we sell has a "Dealer Resource Center," which consists solely of a web page where you can download the official product photography and user's manuals, etc. for their products. This is to enable you to list their products on your e-commerce web site, or whatever.

Apparently whoever they subcontracted this to got their hands on a copy of Front End Dev For Dummies, and in order to use this you must create a mandatory account with minimum password complexity requirements, and solve a CAPTCHA every time you log in. They also require you to change your password every 60 days, and if you don't they lock your account and you have to call their tech support.

Three major problems with this:

  1. There is no verification check that you are actually an authorized dealer of this brand of product, so any fool who finds this on Google and comes up with an email address can just create an account and away you go downloading whatever you want. If you've been locked out of your account and don't feel like picking up the telephone -- no problem! Just create a new one.

  2. There is no personalized content on this service. Everyone sees the same content, and it's not like there's a way to purchase anything on here anyway, and your "account" stores no identifying information about you or your dealership other than your email address and whatever you feel like giving it. You are free to fill it out with a fake name if you like; no one checks. You could create an account using obvioushacker@pwned.ru and no one would notice.

  3. Every single scrap of content on this site is identical to the images and .pdf downloads already available on the manufacturer's public web site. There is no privileged or secure content hosted in this "Resource Center" whatsoever. The pictures aren't higher res or anything. Even the file names are the same. It's obviously hooked up to the same backend as the manufacturer's public web site. So if there were such a thing as a "bad actor" who wanted to obtain a complete library of glamor shots of durable goods, for some reason, there's nothing stopping them from scraping the public web site and coming up with literally exactly the same thing.

It's baffling.


I had to run experiments that generate a lot of data (think hundreds of megabytes per minute). Our laptops had very little internal storage. I wasn't allowed to use an external drive, or my own NAS, or the company share - instead they said "can't you just delete the older experiments?"... Sure, why would I need the experiment data I'm generating? Might as well /dev/null it!

Oh hey I was living this a few months ago!

Mozilla products banned by IT because they had a vulnerability in a previous version.

::: spoiler Rant
It was so bullshit. I had Mozilla Firefox 115.1 installed, and Mozilla put out an advisory, like they do all the fucking time. Fujitsu made it out to be some huge unfixed bug the very next day in an email after the advisory was posted, and the email chain basically said "yk, we should just remove all Firefox. It's vulnerable so it must be removed."

I wouldn't be mad if they decided that they didn't want it to be a managed app, or that there was something (actually) wrong with it, or literally anything else than the fact that they didn't bother actually reading either fucking advisory and decided to nuke something I use daily.
:::

Nah mate, they were completely right. What if you install an older version, and keep using it maliciously? Oh wait, now that you mention it, I'm totally sure Edge had a similar problem at one point in the past. So refrain from using Edge, too. Or Explorer. And while we're at it, it's best to stay away from Chrome as well. That had a similar vulnerability before, I'm sure. So let's ditch that, along with Opera, Safari, Maxthon and Netscape Navigator. Just use Lynx, it's super lightweight!

EDIT: on another thought, you should just have stopped working for the above reason. Nothing is safe anymore.

Hasn't made life hell, but the general dumb following of compliance has left me baffled:

  • users must not be able to have a crontab. Crontab for users disabled.
  • compliance says nothing about systemd timers, so these work just fine 🤦

I've raised it with security and they just shrugged it off. Wankers.
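For the curious, the loophole takes about a minute to use. A minimal sketch of a nightly job as a systemd user timer (unit names and the script path are hypothetical):

```ini
# ~/.config/systemd/user/cleanup.service -- hypothetical nightly job
[Unit]
Description=Nightly cleanup job

[Service]
Type=oneshot
ExecStart=%h/bin/cleanup.sh
```

```ini
# ~/.config/systemd/user/cleanup.timer -- schedules the service above
[Unit]
Description=Run cleanup nightly

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl --user enable --now cleanup.timer` and you have cron in everything but name, with no crontab in sight.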

That's really funny. Made my day, thanks.

Are they super old school and not know about systemd? Or are they doing something out of compliance that they may hate too? I have so many questions.

I actually think they're new school enough where Linux to them means a lot less than it does to us. And so they don't feel at home on a Linux machine and, unfortunately, don't care to learn.

I could totally be wrong, though. Maybe I'm the moron.

I don't think you're the moron. That's super strange. I can only think it might be some sort of standard that they had to comply with... or whatever.

We can't run scripts on our work laptops because of domain policy. Thing is, I am a software developer. They also do not allow Docker without some heavy approval process, nor VMs. So I'm just sitting here remoting into a machine for development... which is fine, but the machine is super slow. Also, their VPN keeps going down, so all the software developers have to reconnect periodically, all at the same time.

At my prior jobs it was all open, so it was very easy to install the tools we needed or get approval fairly quickly. It's more frustrating than anything. At least they plan our software development work months out.

I cannot remember the specifics because it's going back almost 15 years now, but at one point crontab (and various other vital tools) was disabled by policy.

To get necessary processes/cleanup done at night, I used a scheduled task on a Windows PC to run a BAT that opened a macro program which opened a remote shell and "typed" the commands.
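These days you could skip the macro program entirely, since ssh will take the commands as arguments. A sketch of the same trick (the hostname and commands are made up for illustration):

```python
import subprocess

def nightly_argv(host: str, commands: list[str]) -> list[str]:
    """Build the ssh invocation the old BAT-plus-macro setup emulated.

    Returns the argv list so any scheduler (cron, Task Scheduler)
    can run it directly; host and commands are placeholders.
    """
    return ["ssh", host, " && ".join(commands)]

argv = nightly_argv("legacy-box", ["~/bin/cleanup.sh", "logger nightly-done"])
# subprocess.run(argv, check=True)  # left commented: no real host in this sketch
```

One scheduled task, no "typing" macro, and the commands live in version control instead of a BAT file.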

Fuuuuuuck.

I hate this stuff. When I had a more devops role I would just VM everything. Developers need their tools, here is a VM with root. Do what you want and backups run on Friday.

My dev pc isn't allowed to be connected to the internet :D

Yep you have it the worst. Shut down the thread.

Wait, I haven't even started talking about the fact it's a huge unstructured legacy project using SharePoint 2016 and....

Where did everyone go?

Thought my work was bad. We at least can use VMs. I literally can't do my job without one, Rockwell being what it is. Companies don't like upgrading PLC software so I need to use old versions of windows occasionally to run old Rockwell stuff.

There was also a bug for a bit that would brick win11 PCs when trying to update PLC firmware, fun stuff.

Same boat. I use dedicated laptops: this one is for my old Rockwell stuff, this one for my old Siemens stuff, this is my normal laptop with AD stuff, this one for Idec, and the last one for Schneider. Pretty much every laptop at the company that gets retired becomes mine.

Also works for on site access. Customer needs support? Mail them a laptop. I got one laptop that has been in Canada, both coastlines in America, Australia, and Vietnam.

I had a software developer job where they expected me to write code in Microsoft notepad, put it on a USB, and then plug it into airgapped computers to test it. Wasn't allowed to even use notepad++.

Oh it felt so freaken good leaving that job after 6 weeks. It felt even better when I used my old manager's personal phone number on a fake Grindr profile I made. She kept a tally of my bathroom breaks.

Jump systems are a good practice, but they gotta have the resources you need. I hate to say it, but it sounds like y'all need to just move to a cloud platform.

ZScaler. It's supposedly a security tool meant to keep me from going to bad websites. The problem is that I'm a developer and the "bad website" definition is overly broad.

For example, they've been threatening to block PHP.Net for being malicious in some way. (They refuse to say how.) Now, I know a lot of people like to joke about PHP, but if you need to develop with it, PHP.Net is a great resource to see what function does what. They're planning on blocking the reference part as well as the software downloads.

I've also been learning Spring Boot for development as it's our standard tool. Except, I can't build a new application. Why not? Doing so requires VSCode downloading some resources and - you guessed it - ZScaler blocks this!

They've "increased security" so much that I can't do my job unless ZScaler is temporarily disabled.

Also, zScaler breaks SSL. Every single piece of network traffic is open for them to read. Anyone who introduces zscaler should be fired and/or shot on sight. It's garbage at best and extremely dangerous at worst.

Zscaler being the middleman is somewhat the point for security/IT teams using that feature.

And it's a horrible point. You're opening up your entire external network traffic to a third party, whose infrastructure isn't even deployed or controllable in any form by you.

The idea being that it's similar to using other enterprise solutions, many of which do the same things now.

Zscaler does have lesser settings too; at its most basic it can do split tunneling for internal services at an enterprise level and easy user management, which is a huge plus.

I'd also like to point out that the entire Internet is a third party you have no control over which you open your external traffic to everyday.

The bigger deal would be the internal network, which is also a valid argument.

I’d also like to point out that the entire Internet is a third party you have no control over which you open your external traffic to everyday.

Not really. Proper TLS enables relatively secure E2E encryption, not perfect, but pretty good. Adding Zscaler means, that my entire outgoing traffic runs over one point. So one single incident in one single provider basically opens up all of my communication. And given that so many large orgs are customers of ZScaler, this company pretty much has a target on its back.

Additionally: I'm in Germany. My company does a lot of contracting and communication with local, state and federal entities, and a large part of that is not super secret, but definitely not public either. And now suddenly an American company, one that is legally required to hand over all data to the NSA, CIA, FBI, etc., has access to (again) all of my external communication. That's a disaster. And quite possibly pretty illegal.

Yeah. Zscaler was once blocking me from accessing the Cherwell ticket system, which made me unable to write a ticket about Zscaler blocking my access to Cherwell.

Took me a while to get an IT guy to fix it without a ticket.

Oh man, our security team is trialing Zscaler and Netskope right now. I've been sitting in the meetings and it seems like it's just cloud-based GlobalProtect. GP was really solid, so this worries me.

It has the same problem as any kind of TLS interception/ traffic monitoring tool.

It just breaks everything and causes a lot of lost time and productivity firstly trying to configure everything to trust a new cert (plenty of apps refuse to use the system cert store) and secondly opening tickets with IT just to go to any useful site on the internet.

Thankfully, at least in my case, it's trivial to disable so it's the first thing I do when my computer restarts.

Security doesn't seem to do any checks about what processes are actually running, so they think they've done a good job and I can continue to do my job

It's been ages since I had to deal with the daily random road blocks of ZScaler, but I do think of it from time to time.

Then I play Since U Been Gone by Kelly Clarkson.


Removed admin access for all developers without warning and without a means for us to install software. We got access back in the form of a secondary admin account a few days later, it was just annoying until then.

I had the same problem once. Every time I needed to be an admin, I had to send an email to an outsourced guy in another country, and wait one hour for an answer with a temporary password.

With WSL and Linux, I needed to be admin 3 or 4 times per day. I CCed my boss for every request. When he saw that I was waiting and doing nothing for 4 hours every day, he sent them an angry email and I got my admin account back.

The stupid restriction was meant for managers and sales people who didn’t need an admin account. It was annoying for developers.

I worked at a big name health insurance company that did the same. You would have to give them an email, wait a week, then give them a call to get them to do anything. You could not install anything yourself, it was always a person that remote into your computer. After a month, I still didn't have visual studio installed when they wanted me to work on some .Net. Then they installed the wrong version of Visual Studio. So the whole process had to be restarted.

I got a new job within 3 months and just noped out.

Local admin on your interactive account is just bad, though.

Locked down our USB ports. We work on network equipment where we have to use the USB port to log in locally.

One place I worked at did this but had bluetooth on no issues. People brought all kinds of things to the office.

Admin access needed to change the clock, which was wrong. Missed a train because of that.

Worked a job where I had to be a Linux admin for a variety of VMs. To access them, I needed a VPN that only worked inside the company LAN and blocked internet access. It was a 30-day trial license on day 700-something, so it had a max of 5 simultaneous connections. Access was from my heavily locked-down laptop: Windows 7 with a 5-minute locking screensaver. The SSH software was an unknown brand, "ssh.exe", which only allowed one connection at a time in an 80x24 console window with no ability to copy and paste. This went to a bastion host, an HP-UX box on an old csh shell with no write access to your home directory due to a 1.4 MB disk quota per user. Only one login per user, ten logins max, and the bastion host was the only way to connect to the Linux VMs. Default 5-minute logout for inactivity. No SSH keys allowed. No scripting allowed; it was like typing over a 9600 baud line.

I quit that job. When asked why, I told them I was a Linux administrator and the job was not allowing me to administrate. I was told "a poor carpenter always blames his tools." Yeah, fuck you.

A carpenter isn't expected to use his tools with garbage grabbers (reachy claw things) either.

That sounds like the equivalent of asking a carpenter to build a wooden boat large enough to carry 30 people, but only giving them Fisher-Price tools and foam blocks.


Here in Portugal the IT guys at the National Health Service recently blocked access to the Medical Doctor's Union website from inside the national health service intranet.

The doctors are currently refusing to work any more overtime than the annual mandatory maximum of 150h, so there are all sorts of problems in the national health service at the moment, mainly with hospitals having to close down emergency services to walk-in patients (this being AskLemmy, I'll refrain from diving into the politics of it), so the whole thing smells of something more than a mere mistake.

Anyways, this has got to be one of the dumbest abuses of firewalling "dangerous" websites I've seen in a long while.

They set Zscaler so that if I don't access an internal service for some unknown number of months, it decides I don't need it "for my daily work" and blocks it. If I want to access it again I need to open a ticket. There is no way to know what they've closed or when they'll close something.

In the one month since this policy went active, I have already opened tickets to access test databases, the k8s control plane, quality control dashboards, the Tableau server...

I really cannot overstate how wrong this is.

Zscaler is one of the worst products I've had the displeasure to interact with. They implemented it at my old job and it said that my home Internet connection was insecure to connect to the VPN. Cyber Sec guys couldn't figure out the issue because the logs were SO helpful.

Took working with their support to find that it has somehow identified my nonstandard address spacing on my LAN to be insecure for some reason.

I kept my work laptop on a separate vlan for obvious reasons.

Pretty sure it's some misapplied heuristics for previously identified bad clients, but that should only trigger an alert (with details!) in most cases and not block you if it's not also paired with any known malicious activity

I'm going off memory from early 2021. But it was my private IP on the laptop using a Class B private address according to their support team. I was flabbergasted. Maybe they just expected every remote worker to use Class C or something. Who knows?

I am not allowed to change my wallpaper.

This came from your security team? I usually see it from HR / management selling it as a branding issue or "professional" thing.

Even worse here: we can't change the screensaver or the screen lockout timeout settings!

I have a workaround: a little looping script that keeps the screen active. It's not that I particularly object to the screensaver, but once it activates I have to Ctrl-Alt-Delete 3-4 times and enter my password to get my desktop open again. Also, it's an active screensaver that sometimes mucks up my desktop layout (I have a multiple-monitor setup).

That is so annoying.. when I'm working from home I just start a meeting with myself in Teams to keep the pc from autolocking.

That's actually genius. Here's me writing a script to just move the mouse randomly lol, starting a Teams meeting would've been way simpler


Blocked the OWASP web site because it was categorized as “hacking materials”.

My favorite filter was "distasteful," for a sysadmin forum page or reddit thread that had what I hoped would be relevant information.

We have a largish number of systems that IT declared categorically could not connect directly to the Internet for any reason.

So guess what systems weren't getting updates. Also guess what systems got overwhelmed by ransomware that hit what would have been a patched vulnerability, that came through someone's laptop that was allowed to connect to the Internet.

My department was fine, because we broke the rules to get updates.

So did the network team admit the flaw in their strategy? No, they declared a USB key must have been the culprit, and they literally went into every room and confiscated all USB keys and threw them away, with quarterly audits to make sure no USB keys appear. The systems are still not being updated, and laptops with Internet connections are still indirectly bridging them.

Wait, why don't they use patch management software? If they allow computers with Internet access to connect to them, why not a patch management server?

They do. In fact they mandate IT assets to have three competing patch management software on them. They mandate disabling any auto updates because they have to vet them first. My official laptop hasn't been pushed an update in 8 months.

Do y'all need a consultant? That is so bad it's a non starter.

Ironically, we actually have a segment of our business that provides IT for other companies, and they do a decent job, but they aren't allowed to manage our own IT. Best guess is that they are too expensive to waste on our own IT needs. If an IT staff member accidentally shows competence, they are probably moved to the billable group.

Also, I keep a "rogue" laptop to self-administrate along with my official IT laptop to show I am in compliance. Updates are disabled and are only allowed to be done by IT. I just checked, and they haven't pushed any updates for about 8 months.


Password rotation.

As a security guy - as soon as I can get federal auditors to agree, I'm getting rid of password expiration.

The main problem is they don't audit with logic. It's a script and a feeling. No password expiration FEELS less secure. Nevermind the literal years of data and research. Drives me nuts.

Cite NIST SP 800-63B.

Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets. Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.

https://pages.nist.gov/800-63-3/sp800-63b.html

I've successfully used it to tell auditors to fuck off about password rotation in the healthcare space.

Now, to be in compliance with NIST guidelines, you do also need to require MFA. This document is what federal guidelines are based on, which is why you're starting to see Federal gov websites require MFA for access.

Either way, I'd highly encourage everyone to give the full document a read through. Not enough people are aware of it and this revision was shockingly reasonable when it came out a year or two ago.
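The whole 800-63B verifier model fits in a few lines. A rough sketch (the breached-password set here is a tiny stand-in for a real compromised-credential corpus such as the HIBP list):

```python
# SP 800-63B-style verification: length bounds plus a compromised-password
# check, and deliberately NO composition rules or scheduled rotation.
BREACHED = {"password", "password1", "letmein", "qwerty123"}

def acceptable(password: str) -> bool:
    if len(password) < 8:             # minimum length for memorized secrets
        return False
    if len(password) > 64:            # verifiers SHOULD accept at least 64 chars
        return False
    if password.lower() in BREACHED:  # SHALL reject known-compromised values
        return False
    return True                       # no symbol/digit/case rules, no expiry

print(acceptable("correct horse battery staple"))  # True: long passphrase passes
print(acceptable("Password1"))                     # False: known-breached value
```

Notice what's absent: no special-character byzantium, no 90-day timer. The standard puts the work on the verifier (breach checks, MFA, rate limiting), not on the user's memory.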

It's counterintuitive. Drives people to use less secure passwords that they're likely to reuse or to just increment; Password1, Password2, etc.

Also complex and random requirements for passwords

Oh man. Huge company I used to work for had:

  • two separate Okta instances. It was a coin toss as to which one you'd need for any given service

  • oh, and a third internally developed federated login service for other stuff

  • 90 day expiry for all of the above passwords

  • two different corporate IM systems, again coin toss depending on what team you're working with

  • nannyware everywhere. Open Performance Monitor and watch network activity spike anytime you move your mouse or hit a key

  • an internally developed secure document system used by an international division that we were instructed to never ever use. We were told by IT that it "does something to the PC at a hardware level if you install the reader and open a document" which would cause a PC to be banned from the network until we get it replaced. Sounds hyperbolic, but plausible given the rest of the mess.

  • required a mobile authenticator app for some of the above services, yet the company expected that us grunts use our personal devices for this purpose.

  • all of the above and more, yet we were encouraged to use any cloud hosted password manager of our choosing.

I'll go one further with authenticators. Mobile phones were banned in the data center and certain other locations (financial services). Had to set up a landline phone... but to do that I needed to request it, approve it on my phone, then badge through the data center security door, run, and answer the phone line within 60 seconds, like something in The Matrix.

Worked at a medium sized retail startup as a software engineer where we didn't have root access to our local laptops, under the guise of "if you fuck it up we won't be able to fix it", but we only started out with a basic MacBook setup. So every time I wanted to install a tool, IDE, or VM, I had to make a ticket for IT to come and log in with the password, and explain what I was doing.

Eventually, the engineering dept bribed an IT guy to just give us the password and started using it. IT MGMT got pissed when the number of tickets dropped dramatically and realized what was going on.

We eventually came to the compromise that they gave us sudo access with the warning "we're not backing anything up. If you mess up we'll have to factory reset the whole machine". Nobody ever had to factory reset their machine because we weren't children... and if there was an issue we just fixed it ourselves.

Imagine that. IT knowing how to fix the issues they caused. What a revolutionary thought! /s

A long time ago in a galaxy far away (before the internet was a normal thing to have) I provided over-the-phone support for a large and complex piece of software.

So, people would call up and you had to describe how they could do the thing they needed to do, and if that failed they would have to wait a few days until you went to the site to sort it in person.

The software we supported was not on the approved list for the company I worked for, so you couldn't use it within the building where the phones were being answered.

I'm absolutely shocked that a company had a software whitelist before the widespread adoption of the internet. Ahead of their time in implementing, and fucking up, software whitelisting!

It was for government owned computers, they didn't want any pirated or virus-infected stuff, and at that point there was no way to lock down such a mish-mash of systems.

The software company (who also do things like run prisons these days) had given permission for us to run the software and given a set of fake data so we could go through the motions when talking people through things, but apparently that wasn't enough to get it on the list.

There was a server I inherited from colleagues who resigned, mostly serving static HTML. I would occasionally run an apt update && apt upgrade to keep nginx and the like updated, and installed certbot because IT told me that this static HTML site should be served via HTTPS. Fair enough.

Then I went on parental leave and someone blocked all outgoing internet access from the server. Now certbot can't renew the certificate and I can't run apt. Then I got a ticket to update nginx and they told me to use SSH to copy the files needed.

They are sort of right but have implemented it terribly. Serving a static web page is pretty low on the list of "things that are exploitable", but it's still an entry point into the network (unless this is all internal, then this gets a bit silly). What you need to do is get IT to set up a proxy and run apt/certbot through that proxy. It defends against some basic reverse shell techniques and gives you better control over the webhost's traffic. Even better would be to put a WAF and a basic load balancer in front of the webhost, AND proxy external communications.
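For what it's worth, the proxy approach is only a few lines of config on the host. A sketch, where `proxy.internal:3128` is a placeholder for whatever proxy IT actually stands up:

```
// /etc/apt/apt.conf.d/80proxy -- route package updates through the proxy
Acquire::http::Proxy "http://proxy.internal:3128";
Acquire::https::Proxy "http://proxy.internal:3128";
```

certbot should honor the standard proxy environment variables, so renewal becomes something like `sudo https_proxy=http://proxy.internal:3128 certbot renew`, and the webhost never needs direct egress at all.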

Blocking updates/security services is dogshit though, and usually done by people that are a bit slow on the uptake. Basically they have completely missed the point of blocking external comms and created a way more massive risk in the process... They either need to be politely corrected or shamed mercilessly if that doesn't work.

Good luck though! I'm just glad I'm not the one that has to deal with it.

The network has been subnetted into departments. Problem: I, from development, get calls from service about devices that have issues. Before the subnetting, they simply told me the serial number and I let my army of diagnosis tools hit the unsuspecting device to get an idea of what was up with it. Now they have to bring it over and set up all the attached devices here so I can run my tests.

Surely IT can make an exception for you or create a VM with multiple NICs for you.

Or configure a local port on the dev VLAN... Sounds like a corporate environment where the many IT teams don't talk to each other, or the network team is hiding out in a comms cupboard.

Ours is terrible for making security policy that will impact technical solution options in a vacuum, with a few select higher-level IT folks and no one sorting out the process for using the new "secure" way first. You end up finding out that something you thought would be a day-or-two task is a weeks-long odyssey to define new processes and technical approaches. Or sometimes the work just gets outright abandoned because the headache isn't worth it.

Ours does this too. Except they stick to their guns and we end up having to just work around the new impediment they've created for months until it happens to inconvenience someone with enough pull to make them change it.

Endless approval processes are a good one. They don't even have to be nonsensical. Just unnecessarily manual, tedious, applied to the simplest changes, with long wait times and multiple steps. Add time zone differences and pile up many different ones, and life becomes hell.

It took them three weeks to have my super secure voicemail PIN reset, only for me to set it to whatever I wanted.

Made me write SQL updates that had to be run by someone in a different state with pretty much no knowledge of SQL.

Access to change production systems was limited to a single team, which was tasked with doing all deploys by hand, for an engineering organisation of 50+ people. Quickly becoming overloaded, they limited deploy frequency to five deploys per day, organisation-wide.

Bit of a shit-show, that one.

Disabled "unnecessary" services on all member servers including netlogon. That was a fun couple of weeks.

The "we'll just disable everything until somebody complains" strategy. Idiots!

Often times it's the only strategy because most admins or system owners have no clue what services they actually need

SSL proxy, in a company full of developers, so they could sniff traffic. It broke everything. It's one of the reasons I left that company.

I used to work with a guy who glued the USB ports shut on his labs. I asked him why he didn't just turn them off in BIOS and then lock BIOS behind a password and he just kinda shrugged. He wasn't security, but it's kinda related to your story.

¯\_(ツ)_/¯

Security where I work is pretty decent really, I don't recall them ever doing any dumb crazy stuff. There were some things that were unpopular with some people but they had good reasons that far outweighed any complaints.

I completely hear you.

When they did this for the stated reason of preventing data theft via thumb drive, the mice & keyboards were still plugged into their respective USB ports, and if I really wanted I could just unplug my keyboard and pop in a thumb drive. Drag, drop, data theft, done.

Further to this madness, half of the staff had USB hubs attached to their machines within a week which they had purchased at dollar stores. Like...?

At any time, if I had wanted to steal data I could have just zipped it and uploaded it to a sharing site. Or transferred it to my home PC through a virtual machine and VPN. Or burned it using the optical drive. Or come up with 50 other ways to do it under their noses and not be caught.

Basically just a bunch of dingbat IT guys in a contest to see who could find a threat behind every bush. IT policy via SlashDot articles. And the assumption that the very employees that have physical access to the computers... are the enemy.

Okay, I'll concede that SOMEWHERE in the world there exists a condition where somebody has to prevent the insertion of an unauthorized thumb drive: they don't have access to the BIOS, they don't have the password, or that model does not allow disabling the ports. No other necessary devices are plugged in by USB. Policy isn't or can't be set to prevent new USB devices from being added to the system. And this whole enchilada is in a high-traffic area with no physical security and many unknown actors.

Right.

I just wrote a script that let me know if usb devices changed and emailed me. It was kinda funny the one time someone unplugged a USB hub to run a vacuum. I came running as like 20 messages popped up at once.
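A minimal version of that watcher (assuming a Linux box with `lsusb`; the email step is reduced to a print here, and the set diff is the interesting part):

```python
import subprocess
import time

def current_devices() -> set[str]:
    """Snapshot the attached USB devices, one lsusb line per device."""
    out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
    return set(out.splitlines())

def diff(old: set[str], new: set[str]) -> tuple[set[str], set[str]]:
    """Return (removed, added) device lines between two snapshots."""
    return old - new, new - old

def watch(interval: int = 5) -> None:
    seen = current_devices()
    while True:
        time.sleep(interval)
        now = current_devices()
        removed, added = diff(seen, now)
        if removed or added:
            # the real version sent an email; a print stands in here
            print("USB change! removed:", removed, "added:", added)
        seen = now
```

Unplug a hub and every downstream device shows up in `removed` at once, which is exactly the twenty-messages-for-one-vacuum effect described above.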

They forbid us to add our SSH keys on some server machines, and force us to log in to these servers with a non-personal admin account, with a password that is super easy to guess and hasn't been changed in 5 years.

In high school they blocked dictionary.com for some reason.

The IT company I work for purchased me, along with some number of my coworkers and our product line from my former employer. Leading up to the cut over, we’re told that on midnight of the change, our company email will stop working. No forwarders or anything. BUT, we will get a new email that consists of gibberish@stupidsubdomain.company.com. When the password on this new account expires, because we can’t change it because we’re no longer employees, we have to go to a website to request a password change. This emails us a link to our new company email address, but we can’t use that link. We have to manually change part of the URL for it to work. I had them manually change my password twice before I gave up on the whole process. Figured I didn’t work for them anymore. What would they do if I stopped using this bogus account/email address, fire me?

Is it actually gibberish? I have never seen a company use anything other than parts of first name last name at company.

I’m sure it meant something to someone, but it was just letters and numbers to me.

Very short screensaver timeouts, a useless proxy, short timeouts on intranet pages, disabled browser extensions to make it impossible to automate our very repetitive work, daily DB access requests just to do our jobs, etc.

Our IT mandated 15-character passwords. Many people in manufacturing (the guys who make the stuff we produce or set up and fix the machines) have passwords in the format "Somename123456..." You get the picture. When the passwords are forced to change? Yeah, just add "a, b, c, d..." at the end. Many have it written down on a post-it note on the notebook or desk. Security my ass.

I wouldn't be surprised if I found that office guys have it too.

At a place I used to work, one of my coworkers just had their password as a barcode taped to their desk. Now to be fair, we worked in the extra-high-security room, so even getting access to that desk would be a little tricky, and we had about 20 unlabeled barcodes taped to each of our desks for various inventory locations and functions. So if someone wanted to get into their account they would still have to guess which barcode it was, and get into a room only like 10 people had access to. It still felt pretty damn sketchy though.

machine had a RW optical drive

Ah, the Private Manning protocol.

Less the Lady Gaga obfuscation.

We had 40,000 blank discs laying around at all times... because they were a regular part of sending art/data proofs to customers.

o_O

Mine refuses to use IPMI. Also, all switches use the same password.

I was a network administrator at a site, which just made me a glorified system admin with responsibility for the network and switches.

Everyone in the IT Dept had the password for the switches. After one person gave a 3rd-party vendor the password, I had to change the passwords and exclude him from having them... but then everyone else got the password.

That place was nuts, between that and a few other stupid boss actions, I just moved on. Found a much better job and it was for the best.

I dunno, gluing USBs shut in a super sensitive environment like that is actually logical. On the disc drives, they could disable autoplay as well, though removing them or gluing them closed would be preferable. USB is just such an easy attack vector where the individual plugging it in may not have skills themselves: it might be easier to bribe the cleaning folks, for example, or inject a person into a cleaning team. Ideally an attacker would hit multiple nodes of the target's network via as many avenues as possible, which makes the network and VPN thing just silly indeed; perhaps they were waiting for someone to try something with excellent infosec / firewalls / traffic shaping. Yeeeeah, lol.

So like.. Unplug the mouse and plug in the thumb drive... Bam!

That's obvious when a mouse or keyboard doesn't work. OP, and clearly other people in here, don't really understand the actual attack vector in play. They aren't using the USB stick as data storage; they are using it as a cellular-connected RAT and/or a tool to deploy a RAT to a workstation.

I think gluing USB ports is dumb in just about any environment (disabling them in the BIOS is the right answer), but attackers aren't using them to drag and drop files and then physically take the USB stick with them. They plug one into a workstation, or just leave sticks in the parking lot and let other people plug them in, leverage them to get initial access, and then essentially abandon them.

For example see stuxnet: https://en.m.wikipedia.org/wiki/Stuxnet
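If physical lockdown isn't an option, a common software-level complement to disabling ports in the BIOS (a sketch for Linux hosts, not something described in this thread) is blacklisting the kernel's mass-storage drivers, so a found thumb drive can't mount at all while keyboards and mice keep working:

```
# /etc/modprobe.d/block-usb-storage.conf
# Prevent the kernel from loading USB mass-storage drivers.
# HID devices (keyboards, mice) are unaffected.
blacklist usb-storage
blacklist uas
```

This only blocks the storage class, not a malicious device masquerading as a keyboard, so it's one layer among several rather than a complete fix.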

Pretty easy to make a hub device that you can plug the keyboard into and make it transparent to the user. Could even build in a keylogger to capture direct from the keyboard. The attacker would likely need physical access for that, so it wouldn't be as convenient as the thumb drive in the parking lot attack vector, but unless you're using PS/2 peripherals (or gluing those USB devices in too somehow), there's still a fairly open attack vector there, even if you are disabling unused ports in BIOS.

Yep you're right, but at least that adds another layer of complexity to their attack. A lot of security controls are at least somewhat situational, and most non-draconian companies have a process to put further mitigations around those exceptions either from increased monitoring or adding additional supplemental controls.

There's no such thing as perfect security, just better risk mitigation. Slipping a USB hub between the computer and keyboard while someone isn't looking is a bit trickier than just plugging in a USB stick. If you disable unused USB ports in the BIOS, instead of doing silly stuff like gluing them shut, then the attacker has at least been temporarily thwarted if they slot the device into a dead port. Outside the high-traffic areas, disabling ALL USB ports in places like datacenters, and especially colocated datacenters, can thwart the attack outright.

Really, from looking through this thread, a lot of people seem to be under the misconception that security that isn't perfect is pointless. It's like claiming that locking your doors is pointless because lockpicks exist. The point isn't to keep a sophisticated attack at bay, but rather to keep script kiddies and drive-by attacks from hitting your network. To defend against sophisticated attacks you really have to go a bit crazy, and even then very small slip-ups can be disastrous. Ask Microsoft about their root cert getting leaked via a core dump!

I fully acknowledge that many people also work for places with dumbass security controls. Gluing USB ports is WAYYYY up there on that list in my opinion. It also looks like a lot of people work at places with really shitty security teams that haven't quite figured out that controls are situational and require more thought than "see checkbox, execute checkbox."

If it's a secure enough environment, I imagine there will be monitoring on the device, and the moment a hub shows up that isn't supposed to be there, or any other USB device tree that doesn't match the approved list, alarm bells ought to go off. If it's valuable enough, the attack would be a passive device picking up leaky signals on the wire, or even a hidden camera watching the screen/keyboard.
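The "approved USB device tree" idea above can be sketched as a simple allowlist check: compare the (vendor ID, product ID) pairs currently attached against a known-good set and alert on anything new. This is a hedged illustration with made-up device IDs, not any real endpoint-monitoring product's logic:

```python
# Allowlist of approved USB (vendor_id, product_id) pairs.
# These IDs are invented for the example.
APPROVED = {
    (0x046D, 0xC31C),  # approved keyboard
    (0x046D, 0xC077),  # approved mouse
}

def unexpected_devices(attached):
    """Return sorted USB IDs present on the bus but not on the allowlist."""
    return sorted(set(attached) - APPROVED)

# Simulated scan: an unapproved hub (0x05E3, 0x0608) has appeared.
scan = [(0x046D, 0xC31C), (0x046D, 0xC077), (0x05E3, 0x0608)]
print(unexpected_devices(scan))  # -> [(1507, 1544)]
```

In practice the "attached" list would come from the OS (e.g. parsing `lsusb` output on Linux), and an alert would fire whenever the function returns anything non-empty.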

I'm sure there are more elegant ways they could have disabled the USB ports, but this might have been partially to keep users from accidentally compromising their device by sticking a thumb drive they found in the parking lot into it to see what was on it. As for exfiltration and VPN usage over the network, there are other controls they can put in place (and likely had) that you may just not have known about.

They were just paranoid dopes.

I would hear them talking about IT security the way 10 year old boys talk about defending their fort from zombies.

So… what was the zombie situation tho? Were they at least on top of that?

Well if we're following the metaphor, yes they were completely on top of preventing imaginary threats that wouldn't realistically ever materialize lol


Some corporate BS screen lock application that replaces the built in Windows feature. It would take several minutes to log in because of that.

Fortunately you can kill the process with taskmanager and prevent the screen from locking entirely. Lol.

All of it only exists to meet a security standard and lower insurance premiums. So much useless, outdated stuff.

This is what i have to do to log into microsoft fuckin teams on my work laptop when i work from home...

  1. Decrypt my laptop hard drive
  2. Log into my OS
  3. Log into the VPN
  4. Log into teams
  5. Use the authenticator app on my phone to enter the code that is on my screen
  6. Use my fingerprint on my phone to verify that i am the person using my phone...

Step 5 was introduced a few months ago because the other steps weren't secure enough. This is why half my colleagues aren't available when they work from home...

I suggested that we just use Slack as our work chat and leave Teams as a red herring to disappoint extremely talented hackers.

Don’t reuse passwords!

But make them complicated!

Don’t write them down!

Change them every week!