Hacker News | nthypes's comments

Is this a backend library? How do you enable live queries in the frontend?

Author here — thanks for checking it out.

Short answer: the core LinkedQL live query engine runs on the backend today, and there’s an embeddable variant (FlashQL) that runs directly in the frontend with the same LinkedQL capabilities – live queries, DeepRefs, etc.

1. Pure frontend / local data

For data that can live entirely on the client, you can spin up an in-browser FlashQL instance:

    const client = new FlashQL(); // runs in the page / worker

    await client.query(`
      CREATE TABLE users (
        id UUID PRIMARY KEY,
        name TEXT
      )
    `);

    // Live query works the same way as on the backend:
    const result = await client.query(
      'SELECT * FROM users',
      { live: true }
    );

From there, result is a live result set: inserts/updates/deletes that match the query will show up in the rows, and all the same features (live queries, DeepRefs, etc.) behave as they do on a backend instance.

At the moment FlashQL is in-memory only; persistence backends like IndexedDB / LocalStorage are on the roadmap.
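For intuition, a live result set can be modeled as an observable collection whose rows are re-derived whenever the underlying data changes. This is just an illustrative sketch of the concept, not FlashQL's actual internals or API (the `LiveResult` class, `observe`, and `notify` names here are hypothetical):

```javascript
// Hypothetical sketch: a "live" query result backed by a mutable table.
class LiveResult {
  constructor(table, predicate) {
    this.table = table;         // array of row objects (the store)
    this.predicate = predicate; // which rows belong to this query
    this.listeners = [];
  }
  get rows() {
    return this.table.filter(this.predicate);
  }
  observe(fn) {
    this.listeners.push(fn);
  }
  // Called by the store after any insert/update/delete.
  notify() {
    for (const fn of this.listeners) fn(this.rows);
  }
}

// Usage: keep "SELECT * FROM users WHERE active = true" live.
const users = [];
const result = new LiveResult(users, (u) => u.active);
let latest = [];
result.observe((rows) => { latest = rows; });

users.push({ id: 1, name: "Ada", active: true });
result.notify(); // latest now contains Ada
users.push({ id: 2, name: "Bob", active: false });
result.notify(); // Bob doesn't match, so latest is unchanged
```

A real engine does incremental maintenance rather than re-filtering the whole table, but the observable shape is the same.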

2. Remote database from the frontend

If your source of truth is a remote Postgres/MySQL instance, the model we’re building is:

- a LinkedQL engine next to the database, and

- a FlashQL instance in the frontend that federates/syncs with that backend engine.

That federation/sync path is in alpha right now (early docs here: https://linked-ql.netlify.app/flashql/foreign-io ), so today the “stable” story is:

- run LinkedQL on the backend against Postgres/MySQL,

- expose whatever API you like to the frontend,

- and use FlashQL locally where a client-side store makes sense.

The goal is that the frontend doesn’t need a special framework — just a LinkedQL/FlashQL client wherever JavaScript runs.


How does your approach to scaling live queries differ from Zero sync? zero.rocicorp.dev

Just use https://github.com/antl3x/Toolrag and avoid vendor lock-in


Yes, this is my 4th email to Billing Support, and I keep getting "No" as the answer. Moving to Azure...


That's my point though. Azure or AWS are just as bad and that won't solve anything. What are you running that requires a major cloud provider?


Implementing rate limiting in a scalable manner is harder than just making logging faster, and unbounded ingestion makes the cloud provider money, whereas rate limiting would make them less. The incentive makes it pretty clear which side of the tradeoff cloud providers are going to come down on.
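To be fair, rate limiting on the emitting side is cheap; a minimal token-bucket wrapper around a logger (an illustrative sketch, not any particular library's API) is a few lines:

```javascript
// Minimal token-bucket log limiter: allows `rate` lines per second
// with bursts up to `burst`, and silently drops the rest.
// `now` is injectable so the behavior can be tested with a fake clock.
function makeLimitedLogger(rate, burst, now = Date.now) {
  let tokens = burst;
  let last = now();
  return (msg) => {
    const t = now();
    // Refill proportionally to elapsed time, capped at the burst size.
    tokens = Math.min(burst, tokens + ((t - last) / 1000) * rate);
    last = t;
    if (tokens >= 1) {
      tokens -= 1;
      console.log(msg);
      return true;  // logged
    }
    return false;   // dropped
  };
}

// An error loop emitting 2k msgs/sec gets capped at ~10/sec:
const log = makeLimitedLogger(10, 20);
```

The hard part the comment above alludes to is doing this consistently across a fleet on the ingestion side, not in a single process.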


Last month, a single container in my GKE cluster (São Paulo region) entered an error loop, outputting to stdout at ~2k logs/second.

I discovered the hard way that GKE's default behavior is to ingest 100% of this into Cloud Logging with no rate limiting. My bill jumped nearly 1000% before alerts caught it.

Infrastructure (Compute): ~$140 (R$821 BRL)

Cloud Logging: ~$1,300 (R$7,554 BRL)

Ratio: Logging cost 9.2x the actual servers.

https://imgur.com/jGrxnkh
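For anyone estimating their own exposure, here is the back-of-the-envelope math. Both the ~200 bytes/line and the ~$0.50/GiB ingestion price are assumptions (check current Cloud Logging pricing); the cost scales linearly with both rate and line size, and the real bill above implies larger lines and/or a longer window:

```javascript
// Rough ingestion-cost estimate for a runaway logger.
const logsPerSec = 2000;   // the error loop's observed rate
const bytesPerLog = 200;   // assumed average line size
const pricePerGiB = 0.50;  // assumed Cloud Logging ingestion price (USD)

const gibPerDay = (logsPerSec * bytesPerLog * 86400) / 2 ** 30;
const costPerDay = gibPerDay * pricePerGiB;

console.log(gibPerDay.toFixed(1));  // ~32.2 GiB/day
console.log(costPerDay.toFixed(2)); // ~$16.09/day
```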

I fixed the loop and paused the `_Default` sink immediately.

I opened a billing ticket requesting a "one-time courtesy adjustment" for a runaway resource—standard practice for first-time anomalies on AWS/Azure.

I have been rejected twice.

The latest response: "The team has declined the adjustment request due to our internal policies."

If you run GKE, the `_Default` sink in Log Router captures all container stdout/stderr.

There is NO DEFAULT CAP on ingestion volume, which is absurd!

A simple `while true; do echo "error"; done` can bankrupt a small project.

Go to Logging -> Log Router. Edit the `_Default` sink.

Add an exclusion filter: resource.type="k8s_container" severity=INFO (or exclude specific namespaces).
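The same can be scripted via gcloud; verify the flag against the current docs before relying on it, and note the exclusion name "drop-container-info" is just an example (this is a config fragment, not something runnable outside a GCP project):

```shell
# Add an exclusion to the _Default sink from the CLI.
gcloud logging sinks update _Default \
  --add-exclusion=name=drop-container-info,filter='resource.type="k8s_container" AND severity<=INFO'
```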

Has anyone successfully escalated a billing dispute past Tier 1 support recently?

It seems their policy is now to enforce full payment even on obvious runaway/accidental usage.


You could stop renting someone else's computer and run your own clusters until you have all the surprises figured out, you know.

This is the danger of metered compute. Your ignorance is their profit opportunity.


This is a vibe-coded VSCode fork. On a simple task, I got overload/quota-exceeded errors with horrible error handling. lol


Same here. I have a Gemini Pro subscription, and it still shows a quota-exceeded error.


It's a shame they don't even mention the amazing work behind VSCode.


The VSCode homepage does not mention Chromium or Electron.


Your defense of Google might sound smart, but VSCode is not just a fork of Electron and Chromium; there's a lot of work there, not just clicking a fork button.


For me it's about the value-to-token ratio feeling off, as I stated in another comment.

> What bugs me most is that many of those 340M tokens feel wasteful? Like the LLM will use 50k tokens exploring dead ends before finding a solution that could have been expressed in 5k tokens. The productivity gain is real, but it feels like I'm paying 10x more than what should be "fair" for the actual value delivered.


I asked Sonnet 4.5 (Cursor) to generate some stats from my Cursor usage CSV. Here's the output:

Key Findings from Your Cursor Usage

Here are the most interesting insights from your usage data:

Cost Summary

Total Cost: $928.45 over 70 days

Average per Request: $0.06

Cost per Task (Request): Ranges from $0.00 to $2.78, with 65.7% costing under $0.05

Projected Monthly Cost: ~$416 (based on average daily spend of $13.86)

Request Patterns

Requests per 5 Hours: Average 70.7, ranging from 1 to 451

Average Time Between Requests: 6 minutes 33 seconds

Median Time Between Requests: Just 13 seconds (shows bursts of activity)

Peak Activity: 1-2 PM (10.4% of all requests at 1 PM)

Busiest Day: Saturday with 21.7% of requests

Token Efficiency

Average Tokens per Request: 83,371 tokens

Median Tokens per Request: 38,342 tokens

Average Output per Request: 876 tokens

Cache Hit Rate: 88.8% (excellent! saves money)

Cost per 1,000 Tokens: $0.0009 (very efficient due to caching)

Cost per 1,000 Output Tokens: $0.14

Notable Stats

Most Expensive Request: $2.78 using 6.8M tokens (mostly cached)

Total Hours of Active Usage: 1,692 hours (~9 requests/hour)

Most Used Models: claude-4.5-sonnet-thinking, claude-3.5-sonnet, and others

Your cache hit rate of 88.8% is excellent and is saving you significant costs! Without caching, your costs would be much higher.
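The headline numbers roughly cross-check against each other (assuming request count ≈ total cost / average cost per request; the small gap versus the reported $0.0009 is the kind of rounding drift worth watching when an LLM summarizes your own billing data):

```javascript
// Sanity-checking the LLM-generated stats against each other.
const totalCost = 928.45;          // USD over 70 days
const avgCostPerRequest = 0.06;
const avgTokensPerRequest = 83371;

const requests = totalCost / avgCostPerRequest;      // ~15,474 requests
const totalTokens = requests * avgTokensPerRequest;  // ~1.29B tokens
const costPer1kTokens = (totalCost / totalTokens) * 1000;

console.log(costPer1kTokens.toFixed(4)); // ~0.0007
```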


Thanks for the input! I'm checking out Claude Code Max now. From what I'm seeing, even the $200/month plan has weekly rate limits (240-480 hours of Sonnet 4, 24-40 hours of Opus 4 per week)... so not quite unlimited tokens either, though definitely more predictable billing.

$638/6 weeks won't make me broke, but here's my main issue: for me it's about the value-to-token ratio feeling off.

What bugs me most is that many of those 340M tokens feel wasteful? Like the LLM will use 50k tokens exploring dead ends before finding a solution that could have been expressed in 5k tokens. The productivity gain is real, but it feels like I'm paying 10x more than what should be "fair" for the actual value delivered.

Maybe this is just the current state of AI coding - the models need that exploration space to get to the answer. Or maybe I need to get better at constraining the context and being more surgical with my prompts.

For me as a founder, it's less "can I afford this" and more "does this pricing model make sense long-term?" If AI coding becomes a $5-6k/year baseline expense per developer, that changes a lot of unit economics, especially for early-stage companies.

Are you finding Claude Code Max more token-efficient for similar tasks, or is it just easier to stomach because the billing is flat?


I think when you are testing out ideas you cannot also be thinking about how efficient that is; it doesn't make a lot of sense unless the problem you are trying to solve is efficiency. So like, I get your point, but I don't think anyone is wasting tokens: the LLM explores different solutions and arrives at ones that work. You seem to not want to pay for the tokens used on bad solutions, but they were useful for finding the actual solutions. I also think there are plenty of software licenses we pay for at my work that cost multiple times $5-6k/year, and yet all our software is much cheaper than the salaries of the developers it covers. Good developer tools are always worth it imo.

