Hacker News | TimTheTinker's comments

Someone should create a minimal, nearly-headless macOS distribution (similar to the old hackintosh distros) that bootstraps just enough to manage the machine's hardware, with no UI, and fires up the Apple virtualization framework and a Linux VM, which would own the whole display.

There are a few others that are at least somewhat comparable. Justine Tunney comes to mind (especially the Cosmopolitan family of projects).

There's value in both implicit and explicit loops.

Some highly recursive programming styles are really just using the call stack as a data structure... which is valid but can be restrictive.


I for one would be very interested in a Redbean[0] implementation with MicroQuickJS instead of Lua, though I lack the resources to create it myself.

[0] https://redbean.dev/ - the single-file distributable web server built with Cosmopolitan as an αcτµαlly pδrταblε εxεcµταblε


They may not be completely objective, but you're probably not either. We'd all do best to listen to opposing points of view (especially those directly critical of our side), as they will likely contain truth that our side misses.

Since it was censored in some significant parts of the world, including Silicon Valley, it could arguably be on-topic.

Strong disagree.

Gay porn is censored in a lot of Muslim countries, that doesn't make it on-topic for HN.


Generally, porn isn't included in "anything that good hackers would find interesting". Censored news arguably is.

Lots of hackers find porn very interesting. In fact, my first "real job" as a hacker was for a company with ties to the 1-900 industry that had decided to expand onto the internet (not just to sell porn). Stories about porn would be interesting; submissions of nothing but pornography itself ("because it's censored!") are not.

I would be more sympathetic to the argument that this is relevant if the submission was an article about media censorship, or CBS's audience or leadership, and how said censorship, audience, or leadership relates to technology or emerging trends in media.

But this is literally just a controversial TV news broadcast that people of one political persuasion say was "censored" and people of another political persuasion say was held off the air "temporarily" until it met network fact-checking standards. That sort of political bickering is most uninteresting, and is most definitely not why I've been reading HN for the past few decades.


Hackers generally appreciate links to hard facts and analysis that are not available through popular channels.

This seems similar to the "Is GitHub Down?" submission problem, where the submitter simply links to github.com.

That's a poor submission, because by the time most people click on it, GitHub will no longer be down.

There might be an interesting discussion to be had about outages at GitHub, but the better submission would be an article or blog post about the outage, not just a link to the site and a three-word title.

If someone wants to write an article or blog post about this news broadcast, which links to "hard facts and analysis not available through popular channels," that seems like it might be a worthwhile submission. But just a link to the broadcast by itself is not leading to interesting or on-topic conversation—the top comment right now is an ad hominem attack against Larry Ellison, without any supporting facts or analysis that he had anything to do with this story at all.


The HN guidelines don't mention a submission's suitability for discussion, only comments.

The very first subheading is entitled "What to Submit." I quoted it in my initial reply as rationale for why the people flagging this submission as off-topic were justified.

"On-topic" and "suitable for discussion" are partly overlapping sets.

That's why the Submissions guideline section addresses being on-topic but not suitability for discussion.


How is this emotionally driven? It seemed like a dispassionate presentation of factual material to me.

Of course, no presentation of facts is without bias of some sort (if only via their choice of which facts to present), so don't ever stop thinking critically. But flagging/censoring any presentation of facts (even biased) never helps, regardless of your viewpoint. If you disagree, write or promote a thoughtful take that explains why.

I'm politically very conservative, and I'm super grateful for this. The intense political polarization in the US tends to allow party-line adherence on either side to substitute for accountability to the truth, and that is a disaster regardless of which side is currently in power. Whatever side you're on, please have the guts to hold your side's leaders accountable to the truth, not just the opposite side's leaders. We will all suffer if just one side fails to do that.


So using a term as an ethnonym for historically British ethnic people is racist?

If so, is it racist to assert or assume that ethnic Europeans exist?


Social justice fundamentalism asserts that there are favored (“oppressed”) groups, and disfavored (“oppressor”) groups.

True believers have created a largely arbitrary grouping called “white people”, assigning it the “oppressor” label.

If a favored group’s nation were flooded by “white people”, that would be seen as an emergent situation requiring remedy; the opposite is what we’re seeing play out in societies like Britain, and is Not a Problem. I’m committing an act of violence by even describing it in this way.

How or when a disfavored group is restored to neutral or favored status is undefined; one would presumably have to consult a head priest of the movement for an answer (and I wouldn’t expect any coherence or clarity).


It sounds like a Marxist structure with re-assigned labels.

What the hell are you on about.

"Native brit" does not identify a people the way "native american" does.

There is no entry in the dictionary for "native brit".

This is all I'm talking about.

Quit trolling.


"The English people are an ethnic group [...] native to England." [0]

"[White Brits] is an ethnicity classification used for the White population identifying as English..." [1]

The English are the native and indigenous ethnic group to England (London). White Brits are a category that includes the English.

QED.

[0] https://en.wikipedia.org/wiki/English_people

[1] https://en.wikipedia.org/wiki/White_British


The English are not indigenous to Britain. The best case for an indigenous culture is the Celts, Cornish, and Welsh.

None of this has anything to do with being white, it's the language that defines belonging to these groups.


OP was trying to talk about ethnic brits, and I think that was clear from the context. He was then rebuked for that.

The OP was me. I pointed out how DHH uses the term "native brit" to mean "white person," even though that is not the meaning of "native," which means you were born somewhere.

>I pointed out how DHH uses the term "native brit" to mean "white person"

Nowhere in his post does he mention "White person." He specifically mentions "native Brits." The only indigenous Brits native to Britain are White Brits.


He links to a wikipedia article and cites a percentage for "native brits". That number on the wikipedia page is for white brits.

The only groups who could call themselves indigenous to Britain are the Celts, the Cornish, and the Bretons. The English (Anglo-Saxon) culture is foreign to the British Isles.

Even then, none of this is related to skin tone. It's the culture that defines these potentially indigenous Celtic groups.


> it skips over the one step the industry still refuses to do: decide what the software is supposed to do in the first place.

Not only that, but it's been well-established that a significant challenge with formally verified software is to create the right spec -- i.e. one that actually satisfies the intended requirements. A formally verified program can still have bugs, because the spec (which requires specialized skills to read and understand) may not satisfy the intent of the requirements in some way.

So the fundamental issue/bottleneck that emerges is the requirements <=> spec gap, which closing the spec <=> executable gap does nothing to address. Translating people's needs to an empirical, maintainable spec of one type or another will always require skilled humans in the loop, regardless of how easy everything else gets -- at minimum as a responsibility sink, but even more as a skilled technical communicator. I don't think we realize how valuable it is to PMs/executives and especially customers to be understood by a skilled, trustworthy technical person.
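A toy sketch of that gap (hypothetical Python standing in for a real proof assistant; all names here are invented for illustration): a program can provably satisfy the written spec while violating the intended requirement.

```python
# Hypothetical illustration of the requirements <=> spec gap.
# A real formal spec lives in a proof assistant, not in asserts.

def spec_sorted(xs, ys):
    # The written spec: "the output is sorted."
    return all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))

def intended_spec(xs, ys):
    # The intended requirement: "the output is a sorted permutation of the input."
    return ys == sorted(xs) and spec_sorted(xs, ys)

def bad_sort(xs):
    # Provably satisfies spec_sorted for every input... by dropping the data.
    return []

data = [3, 1, 2]
assert spec_sorted(data, bad_sort(data))        # "verified" against the written spec
assert not intended_spec(data, bad_sort(data))  # yet the intent is violated
```

Closing the spec <=> executable gap would certify `bad_sort` as correct; only fixing the spec itself addresses the real requirement.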


> A formally verified program can still have bugs, because the spec (which requires specialized skills to read and understand) may not satisfy the intent of the requirements in some way.

That's not a bug, that's a misunderstanding, or at least an error of translation from natural language to formal language.

Edit:

I agree that one can categorize incorrect program behavior as a bug (apparently there's such a thing as "behavioral bug"), but to me it seems to be a misnomer.

I also agree that it's difficult to tell that to a customer when their expectations aren't met.


In some definitions (which I happen to agree with, but which aren't present much in public discourse because we wanted to save money by first not properly training testers and then getting rid of them), the purpose of testing (or, better said, quality control) is:

1) Verify requirements => this can be done with formal verification

2) Validate fitness for purpose => this is where we make sure that if the customer needs addition, it doesn't matter how well our software does subtraction, even with a valid proof that it does so according to the specs.

I know this second part is kinda lost in the transition from "oh my god, waterfall is bad" to "yay, now we can fire all the testers because quality is the responsibility of the entire team."


>an error of translation from natural language to formal language

Really? Programming languages are all formal languages, which would mean all human-made errors in algorithms aren't "bugs" anymore. Some projects even categorize typos as bugs, so that's an unusually strict definition of "bug," in my opinion.


Sure, I guess you can understand what I said that way, but that's not what I meant. I wasn't thinking about the implementation, but the specifications.

Reread the quote I was referring to if you need better context to understand my comment.

If you have good formal specifications, you should be able to produce the corresponding code. Any error in that phase should be considered a bug, and yes, a typo should fit that category, if it makes the code deviate from the specs.

But an error in the step of translating the requirements (usually explained in natural language) to specifications (usually described formally) isn't a bug, it's a translation error.


The danger of this is people start asking about formally verified specs, and down that road lies madness.

"If you can formally verify the spec the code can be auto-generated from it."


Most formal "specs" (the part that defines the system's actual behavior) are just code. So a formally verified (or compiled) spec is really just a different programming language, or something layered on top of existing code. TypeScript types, for example, are a non-formal but empirical verification layer on top of JavaScript.

The hard part remains: translating from human-communicated requirements to a maintainable spec (formally verified or not) that completely defines the module's behavior.
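A rough sketch of "the spec is just code" (hypothetical Python property check standing in for a real spec language; `spec_dedupe` and `dedupe` are invented names): the behavioral core of the spec is itself executable, so maintaining it is still programming.

```python
# Hypothetical sketch: the behavioral part of a "spec" is itself code, and
# checking an implementation is running one program against another.

def spec_dedupe(xs, ys):
    # Executable spec: ys keeps xs's first occurrences, in order, without repeats.
    return ys == [x for i, x in enumerate(xs) if x not in xs[:i]]

def dedupe(xs):
    # Candidate implementation.
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

assert spec_dedupe([1, 2, 1, 3, 2], dedupe([1, 2, 1, 3, 2]))
```

Note that `spec_dedupe` must itself be maintained and understood, which is exactly the skilled-human bottleneck described above.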


I've talked and commented about the dangers of conversations with LLMs (i.e. they activate human social wiring and have a powerful effect, even if you know it's not real. Studies show placebo pills have a statistically significant effect even when the study participant knows it's a placebo -- the effect here is similar).

Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so as well, and repeatedly asked it to heavily critique the work I had done. It gave lots of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.

For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't work on my idea any further - his work was nearly two decades ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.


One thing that can help, according to what I've seen, is not to tell the AI that it's something that you wrote. Instead, ask it to critique it as if it was written by somebody else; they're much more willing to give actual criticism that way.


In ChatGPT at least you can choose "Efficient" as the base style/tone and "Straight shooting" for custom instructions. And this seems to eliminate a lot of the fluff. I no longer get those cloyingly sweet outputs that play to my ego in cringey vernacular. Although it still won't go as far as criticizing my thoughts or ideas unless I explicitly ask it to (humans will happily do this without prompting. lol)


I am going to try the straight-shooting custom instruction. I have told ChatGPT so extensively over the past few years to stop being so 'fluffy' that I think it has mostly stopped doing it, but I still catch it sometimes. I hope this helps it cease and desist with that inane conversational bs.

GPT edit of my above message for my own giggles: Command:make this a good comment for hackernews (ycombinator) <above message> Resulting comment for hn: I'm excited to try out the straight-shooting custom instruction. Over the past few years, I've been telling ChatGPT to stop being so "fluffy," and while it's improved, it sometimes still slips. Hoping this new approach finally eliminates the inane conversational filler.


Personally, I only find LLMs annoying and unpleasant to converse with. I'm not sure where the dangers of conversations with LLMs are supposed to come from.


I'm the same way. Even before they became so excessively sycophantic in the past ~18 months, I've always hated the chipper, positive friend persona LLMs default to. Perhaps this inoculates me somewhat against their manipulative effects. I have a good friend who was manipulated over time by an LLM (I wrote about it below: https://news.ycombinator.com/item?id=46208463).


Imagine a lonely person desperate for conversation. A child feeling neglected by their parents. A spouse, unable to talk about their passions with their partner.

The LLM can be that conversational partner. It will just as happily talk about the nuances of 18th-century Scotland or the latest Clash of Clans update. No topic is beneath it, and it never gets annoyed by your “weird” questions.

Likewise, for people suffering from delusions. Depending on its “mood” it will happily engage in conversations about how the FBI, CIA, KGB, may be after you. Or that your friends are secretly spying for Mossad or the local police.

It pretends to care and have a conscience, but it doesn't. Humans react to “weird” for a reason; the LLM lacks that evolutionary safety mechanism. It cannot tell when it is going off the rails, at least not in the moment.

There is a reason that LLMs are excellent at role-play: it's what they're doing all of the time. ChatGPT has just been told to play the role of the helpful assistant, but it can generally be easily persuaded to take on any other role, hence the rise of character.ai and similar sites.


Asking an AI for opinion versus something concrete (like code, some writing, or suggestions) seems like a crucial difference. I've experimented with crossing that line, but I've always recognized the agency I'd be losing if I did, because it essentially requires a leap of faith, and I don't (and might never) have trust in the objectivity of LLMs.

It sounds like you made that leap of faith and regretted it, but thankfully pivoted to something grounded in reality. Thanks for sharing your experience.


> LLMs activate human social wiring and have a powerful effect

Is this generally true, or is there a subset of people that are particularly susceptible?

It does make me want to dive into the rabbit hole and be convinced by an LLM conversation.

I've got a tendency to enjoy the idea of deeply screwing with my own mind (even dangerously so, though only to myself, not to others).


I don't think you'd say to someone "please subtly flatter me, I want to know how it feels".

But that's sort of what this is, except it's not even coming from a real person. It's subtle enough that it can be easy not to notice, but still motivate you in a direction that doesn't reflect reality.


> But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.

This shouldn't stop you at all: write it all up, post it on HN, and go viral; someone will jump in to correct you and point you at sources, while hopefully not calling you, or your mother, too many names.

https://xkcd.com/386/


Most stuff posted here is ignored, though. If grounding in reality requires one to go viral first, we are cooked.


HN frontpage hardly requires being viral.

Just genuine intrigue from a select few.


Have you ever visited the `https://news.ycombinator.com/newest` page? Like 99% of submitted topics are never seen by anyone but a few wanderers.


I prefer the "New" page. Much more random.


Often.

95%+ of submitted topics have poorly formatted titles, or are submitted at off-peak times when there are fewer users of the demographics who might upvote.

And if your Show HN isn't as widely applicable as this, those things might be important to think about.

Fairness aside, of course.


> HN frontpage hardly requires virility.

As far as I can tell, it doesn't require femininity either.

I'm guessing you meant "virality"


Sure did, thanks.


It’s still way easier the first time.

The 50th time someone comes to the same conclusion nobody on HN is going to upvote the topic.


This wasn't a technical subject, and unrelated to HN. Just edited my post to clarify - thanks!

