tgsovlerkhgsel's comments

It reads to me like attempting to verify a malicious ASCII-armoured signature is a potential RCE.

"And that told me everything I needed to know about how Google really thought about things vs what they said they thought about things."

What you describe doesn't really provide much signal about this, because a big corp will always have a huge interest in having uniform working contracts. Exceptions are possible, but only worth the headache for fairly high-level employees. So even for a clause that they really wouldn't care much about, you'd expect a similar reaction.


I encountered a similar situation in my career. The work looked good, the team looked good, money was good.

Then, when the work contract came, there were some unusual clauses about my salary that I was not comfortable with. They first said it was OK to ignore the clause, as they would pay my salary as explained orally. I insisted that they write the work contract the way they planned to pay me. After about a week of back and forth, they admitted that the clause was indeed unusual, was there for historical reasons, and that they planned to change it in the future. However, they said no clause in the contract could be changed as of now, as it was the same contract for every employee, and no past employee had ever complained about it.

Unfortunately, I ended up declining the offer, as I decided the risk was not worth it.


> a big corp will always have a huge interest in having uniform working contracts

Then what they choose for that uniform contract tells us a lot.


If only there were a way for employees to collectively choose a uniform working contract. Oh wait, no, companies don't like that either.

Normalization of deviance via law.

LLMs are a great interface for ffmpeg. Sometimes it takes 2-3 attempts/fixes ("The subtitles in the video your command generated are offset: I see the subtitles from the beginning of the movie, but the video is cut from the middle of the movie as requested; fix the command"), but it generally creates complex commands much more quickly than manual work (reading the man page, crafting the command, debugging it) would.
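To make that concrete, here's a sketch of the kind of command such a session converges on, wrapped in Python for reproducibility. The filenames and timestamps are made up; the point is that seeking (-ss) before each input keeps the cut video and the external subtitles aligned, fixing the offset described above.

    # Hedged sketch, not a definitive recipe: cut a movie from the
    # 45-minute mark while keeping external subtitles in sync.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-ss", "00:45:00", "-i", "movie.mkv",  # seek the video input
        "-ss", "00:45:00", "-i", "movie.srt",  # seek the subtitles too
        "-map", "0:v", "-map", "0:a", "-map", "1:s",
        "-c:v", "copy", "-c:a", "copy", "-c:s", "srt",
        "out.mkv",
    ], check=True)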

There is a classic pattern with incident reports that's worth paying attention to: The companies with the best practices will look the worst. Imagine you see two incident reports from different factories:

1. An operator made a mistake and opened the wrong valve during a routine operation. 15000 liters of hydrochloric acid flooded the factory. As the flood started from the side with the emergency exits, it trapped the workers, 20 people died horribly.

2. At a chemical factory, the automated system that handles tank transfers was out of order. A worker was operating a manual override and attempted to open the wrong valve. A safety interlock prevented this. Violating procedure, the worker bypassed the safety interlock, causing 15000 liters of hydrochloric acid to flood the facility. As the main exit was blocked, workers scrambled towards an additional emergency exit hatch that had been installed, but couldn't open it because a pallet of cement had been improperly stored next to it, blocking it. 20 people died horribly.

If you look at them in isolation, the first looks like just one mistake was made, while the second looks like one grossly negligent fuckup after another, making the second report look much worse. What you don't notice at first glance is that the first facility didn't have an automated system that reduced risk for most operations in the first place, didn't have the safety interlock on the valve, and didn't have the extra exit.

So, when you read an incident report, pay attention to this: if it doesn't look like multiple controls failed, often in embarrassing/bad/negligent/criminal ways, that's potentially worse, because the controls that should have existed didn't. "Human error took down production" is worse than "A human making a wrong decision overrode a safety system because they thought they knew better, and the presubmit that was supposed to catch the mistake had a typo". The latter is holes in several layers of Swiss cheese lining up; the former is having only one layer in the first place.
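A back-of-envelope calculation makes the asymmetry concrete (numbers made up for illustration, and assuming the layers fail independently):

    # Sketch: if each independent safety layer fails 5% of the time,
    # an incident needs every layer to fail at once.
    p = 0.05
    for layers in (1, 2, 3, 4):
        print(f"{layers} layer(s): incident probability {p ** layers:.6%}")
    # 1 layer(s): incident probability 5.000000%   <- factory #1
    # 4 layer(s): incident probability 0.000625%   <- factory #2

The four-layer factory is orders of magnitude safer, yet its (much rarer) incident report necessarily lists four failures instead of one.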


I wish I had more upvotes for you. While the Swiss cheese model is well known on HN by now, your post goes a bit deeper and reveals a whole new framework for reading incident reports. Thanks for making me smarter.

I don’t understand the point of this theory. Not having safety controls is bad, but having practices so bad that workers violate N layers of safety protocol in the course of operation is also bad. They’re both problems in need of regulation.

I was trying to focus on one specific pattern without making my post too long. Alert fatigue, normalization of deviance etc. are of course problems that need to be addressed, and having a lot of layers but each with a lot of giant holes in them doesn't make a system safe.

My point was that in any competent organization, incidents should be rare, but if they still happen, they will almost by necessity read like an endless series of incompetence/malfeasance/failures, simply because the organization had a lot of controls in place that all had to fail for a report-worthy bad outcome.

Overall incident rates are probably a good way to distinguish between "well-run organization had a really unlucky day" and "so much incompetence that having enough layers couldn't save them"... and in this case, judging by the reports about how many accidents/incidents this company had, it looks like the latter.

But if you judge solely on a single incident report, you will tend to rate companies that don't even bother with safety better than those that generally do but still got hit. You should be aware of this effect and pay attention to distinguish between "didn't even bother", "had some safety layers but too much incompetence", and "generally does the right thing but things slipped through the cracks this one time".


The failure rate of an individual layer of Swiss cheese can be bounded under most circumstances, but not all. So you should probably add more layers when hazards cannot be eliminated.

The Chernobyl reactor 4 explosion is a bit like this. Safety rules were ignored, again and again and again and again, two safety controls were manually deactivated (all within 2 hours), then bad luck happened (the control rod channels were deformed by the temperature), and then a design flaw (graphite on the tips of the control rods) made everything worse, culminating in the worst industrial catastrophe of all time.

This has to be some form of GDPR or personality-rights violation (in countries that have those) that simple terms of service don't get them out of.

You don't entirely get to opt out.

If you crash into a BMW, you'll still have to pay for the now-inflated cost of the repair - most likely through the mandatory liability insurance. Insurance premiums have gone through the roof, in large part not just due to insurance company greed, but because cars and repairs to them have become more expensive.


Or they may not understand how PDF works and think that it's the same as paper.

Especially with the "draw a black box over it" method, the text also stops being trivially mouse-selectable (even if CTRL+A might still work).
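If you want to see this for yourself, here's a minimal sketch using the pypdf library (the filename is hypothetical): text covered by a drawn-on black box is still present in the PDF's content stream, so any extractor returns it.

    # Sketch, assuming a PDF "redacted" by drawing black boxes over text
    from pypdf import PdfReader  # pip install pypdf

    reader = PdfReader("redacted.pdf")
    for page in reader.pages:
        # extract_text() reads the text objects in the content stream,
        # so the covered text comes back like any other text
        print(page.extract_text())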

Another possibility is, of course, that whoever was responsible for this knew exactly what they were doing, but this way they can claim an honest mistake rather than intentionally leaking the data.


A while back I did a little work with a company that was meant to help us improve our security posture. I terminated the contract after they sent me documents in which they'd redacted their own AWS keys using this method.

> Or they may not understand how PDF works and think that it's the same as paper.

Yes; that's presumably included in being "amateurish" and "not following proper process".


With the expensive carriers, you nowadays get the super-cheap service but not the super-cheap price...

My one and only experience with Ryanair was that they were rude and hostile even in places where they weren't trying to fleece you. From in-your-face rude signs (official, corporate-designed ones, not something printed from Word by a random employee) to a UI where you needed to concatenate strings in order to craft a valid input (something like "enter your credit card number, followed by #, followed by the MMYY validity date"). Maybe that was to make people fail check-in and force them to pay for check-in at the counter, but I think it was early in the booking flow, i.e. where they had no incentive to make it hard.

When was this? I have zero recollection of ever doing credit card number formatting anywhere.

Over 15 years ago, I think, and I have no idea if it was credit card numbers or something else. It was sufficiently bad UX to stand out as crazy, even for a consumer-oriented website in the much less polished web back then.

Yes, this sounds made-up/not Ryanair. I've used them for over a decade, paid with many different cards, and have never encountered this with them (nor anywhere else, really).

He didn't mean it literally. Read the comment more carefully.

An unknown percentage of people actually want the insurance. If only 2% bought it despite such an extreme dark pattern, the other 98% of customers did much better than I would have expected.

It's true you don't know who wants it, but I thought capitalism was supposed to work by mutual consent and transparency of contract. If even one person is deceived, that's a scam! I doubt that, out of tens or hundreds of thousands of people, all of them figured this out and wanted the insurance.
