I found that there's a surprising difference in quality for what feels like it should be a commodity item. All the outlets in my newish build were tamper-resistant, and pretty much as you described -- at best they were unpleasantly stiff and awkward to use, and some specific outlets would require a worrying amount of force and wiggling to plug anything in.
After a couple of high-usage outlets got jammed to the point that nothing could be plugged in, I replaced them with ones from the hardware store, and they are a big improvement. The existing outlets are unbranded, and I guess were from a bulk box of the cheapest that the electrician could source.
In my experience, Leviton are OK (much better than what was originally fitted), but Eaton are great -- they require slightly more force than non-TR outlets, but they're consistent, reliable, and I've never had to try more than once to plug anything in.
This is not a dissimilar system to Teletext[1], which transmitted data in the blanking interval of a broadcast TV signal, and could be interpreted by a TV or other hardware with appropriate support. Teletext was pretty widespread throughout Europe in the 1980s and 1990s.
It was typically used to transmit pages of information (news, weather, etc.) that could be viewed directly on the TV, but the BBC's Ceefax[2] Teletext service was also used to distribute software to the BBC Micro, when equipped with the appropriate Teletext Adapter[3].
In a similar fashion to the Sega Channel system, the Teletext system would broadcast looped data, with popular pages (such as news and weather) being repeated frequently so they would load quickly, and less popular pages taking longer to load (or more accurately, to wait for the next time they appeared in the looped data).
I was interested to see that the Sega system used a bitrate of 8Mbps, which sounded pretty high for the mid-90s, but I see that Teletext had an instantaneous line bitrate of almost 7Mbps for PAL broadcasts, despite being roughly 15 years older!
You got me interested in how the signal was transmitted, so I looked a bit more into it (see https://segaretro.org/File:SegaChannel_Applications_Scientif... ). It turns out that the 8Mbps number I eyeballed from looking at newspaper coverage of the service was incorrect. When the cable provider received the Sega Channel data stream, they'd split it into two 6Mbps carriers. This allowed them to transmit Sega Channel data without having to dedicate a channel to data, as they could put the carriers between cable channels or in the portion of the spectrum used for cable FM radio. I updated the webpage with the corrected figures.
Yeah, I was wondering how TV-like this channel looked.
It's tempting to wrap it in fake horizontal/vertical blanking so it still looks like a TV signal (and you can send it through existing equipment that's expecting a TV signal). Essentially just Teletext but using every single line.
But what Scientific Atlanta created is much closer to cable internet. The total bandwidth figure is notable: each 6Mbit carrier occupies 3MHz, so the two of them add up to 6MHz, which is exactly how much spectrum a standard NTSC channel occupies.
I suspect this is because they rented a single TV channel's worth of bandwidth on the Galaxy 7 satellite for distribution to local cable companies.
Splitting the stream into two 3MHz carriers has the additional advantage of simplifying the receiver design: it only needs to tune into one of them at any time.
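For anyone checking the arithmetic, here's a quick back-of-the-envelope sketch. The ~2 bits/s per Hz spectral efficiency is my assumption, inferred from the 6Mbit-in-3MHz figure above rather than stated anywhere in the documentation:

```python
# Back-of-the-envelope: two Sega Channel data carriers vs. one NTSC channel.
# Assumes ~2 bits/s per Hz of spectrum, consistent with 6 Mbit/s in 3 MHz.
carriers = 2
bitrate_per_carrier_mbps = 6.0
bandwidth_per_carrier_mhz = bitrate_per_carrier_mbps / 2.0  # assumed 2 bits/Hz

total_bandwidth_mhz = carriers * bandwidth_per_carrier_mhz
total_bitrate_mbps = carriers * bitrate_per_carrier_mbps

ntsc_channel_mhz = 6.0  # spectrum occupied by one standard NTSC cable channel

print(total_bandwidth_mhz)  # 6.0 -- exactly one NTSC channel's worth of spectrum
print(total_bitrate_mbps)   # 12.0 -- total Mbit/s across both carriers
```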
So my question is: knowing that, and looking at the marketing literature, is it possible to somehow recreate the signal for use on actual hardware?
It's possible to RF-modulate composite sources over coax for home cable systems (think Blonder Tongue gear, or even consumer gear). Since it was combined into the 6MHz signal somewhere along the line, could one not pipe this signal into a coax cable and then into the hardware to recreate it?
I'm looking at setting up my own home analog CATV system and this would be the cherry on the cake. I guess the real question is what the device was expecting in that signal -- what was being modulated out over the wire.
Teletext is still very much alive in Germany, pretty much every channel offers it. In fact, I use it most days to quickly check if there are any interesting headlines to follow up online, or for sports results. It's funny that it is frequently faster and easier to navigate than most enshittified news websites.
It was even crazier in Germany! In 2000, the television station NBC received a radio license for RadioMP3. They broadcast the charts and entire albums (with cover art) via teletext, which could be legally recorded at home -- at a bit rate of 128 kbit/s, simply with a TV capture card. The public broadcaster also transmitted software via “VideoDAT” during its ComputerClub program, although this required special hardware.
At a company I used to work at, there was a service that scraped the teletext XML feed to get some numbers as a second source, to double-check what was scraped off a website.
> I was interested to see that the Sega system used a bitrate of 8Mbps, which sounded pretty high for the mid-90s,
This was over cable TV, so not very difficult to obtain these rates compared to general broadcast TV. Cable internet service rolled out in this time period with downstream rates of 40 Mbps per 6 MHz channel.
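The ~40 Mbps per 6 MHz channel figure follows from the 256-QAM modulation that downstream cable channels used; a rough sketch (the symbol rate is approximate, and real systems lose some of the raw rate to FEC and framing overhead):

```python
# Rough check of downstream capacity in one 6 MHz cable channel using 256-QAM.
symbol_rate_msym = 5.36   # ~Msymbols/s that fit in a 6 MHz channel (approximate)
bits_per_symbol = 8       # 256-QAM encodes 8 bits per symbol (2^8 = 256)

raw_mbps = symbol_rate_msym * bits_per_symbol
print(round(raw_mbps, 1))  # ~42.9 Mbit/s raw; ~40 Mbit/s usable after overhead
```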
I can't speak to the cloud provider support, as I don't use it, but they have supported scheduled backups since at least June 2019, as I have locally-stored backups from scheduled Takeout jobs going back that far.
Bear in mind that the scheduling support is extremely basic -- the only option available is to schedule six exports, one every two months. You can't change the frequency or the number.
Also, you can't pick when the schedule starts, so if you want backups every two months indefinitely, you have to remember to schedule the next set two months after the final export of the previous scheduled job finishes.
There were also several others, including some live shows, all of which can be found on the BBC's Computer Literacy Project Archive: https://clp.bbcrewind.co.uk/
I think things can go wrong when using a table saw (or most power tools) faster than some people, including some woodworkers, might expect. There's a good example video here (warning: shows a very minor injury):
While we're still not talking microseconds, I think it highlights that moving the blade out of the way needs to happen very quickly in some cases to avoid serious injury.
Huh, it's probably that simple. It doesn't explain why it thought it had definitely found the answer when there were a bunch more PMs to go, but yeah, that does qualify Churchill.
As a counter-example, I haven't had an entirely successful Google Takeout export in at least a couple of years, using their service to schedule the exports automatically every two months.
I always have a failure with Google Fit data, which reports 'Service failed to retrieve this item' on the same JSON file every time. I assume this is something corrupted at their end.
It's not that uncommon for my exports to intermittently show failures with other services -- for example, my latest export, taken on April 20th, also failed to include one of my YouTube videos, with the same 'Service failed to retrieve this item' error. That video is usually included successfully, so I'm guessing this was a glitch.
Nothing major, but I can well believe that others also experience regular errors, although I'm sure we're in the minority.
The auto-calibration systems on cheaper consumer sensors also cause issues if they rarely see air with low CO2 levels. Because these sensors can't measure absolute CO2 levels, only relative ones, they derive an absolute figure by assuming the lowest CO2 concentration they've seen over a period of time (usually a rolling window of around 72 hours) corresponds to outdoor air.
This works acceptably if the sensor is frequently exposed to outdoor air, but in a residential environment that's not always guaranteed, particularly in winter when it's not uncommon to keep windows closed to retain heat. In these situations the sensor will treat the lowest level it has seen as ~400ppm, even if it's actually much higher. This, of course, offsets all other readings downward, so a sensor might read between 400-1200ppm, leading you to believe everything is fine, when the actual indoor range is 800-1600ppm.
Because the auto-calibration happens over a period of time, it can be quite difficult to determine that your sensor is misreading, and the only way to fix it is to expose it to fresh air to reset the baseline.
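To make the failure mode concrete, here's a minimal model of this style of automatic baseline calibration (plain Python for illustration, not real sensor firmware):

```python
# Minimal model of automatic baseline calibration (ABC) on a single-NDIR sensor.
# The sensor assumes the lowest raw reading in its rolling window was outdoor
# air at ~400ppm, and offsets all subsequent readings accordingly.
OUTDOOR_PPM = 400

def calibrated_reading(raw_ppm, window_min_ppm):
    """Offset the raw reading so the window minimum maps to the assumed outdoor level."""
    offset = window_min_ppm - OUTDOOR_PPM
    return raw_ppm - offset

# Sensor regularly exposed to fresh air: the window minimum really is ~400ppm.
print(calibrated_reading(1200, 400))  # 1200 -- reads correctly

# Windows shut all winter: the lowest CO2 the sensor ever saw was 800ppm, so it
# quietly treats 800ppm as "outdoors" and under-reports everything by 400ppm.
print(calibrated_reading(1200, 800))  # 800 -- dangerously optimistic
```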
The best solution I found to this is a dual-NDIR sensor, which measures two different light frequencies, one which is absorbed by CO2 and one that isn't. This allows the sensor to know the absolute CO2 concentration, rather than the relative concentration, and avoids the need for auto-calibration. (I believe for absolute accuracy it still needs calibration for altitude, but for consumer use this makes such a small difference as to be irrelevant.)
Unfortunately, when I last looked, I couldn't find any consumer-grade devices which used dual-NDIR sensors, only more expensive and less aesthetic commercial ones. In the end I built my own using a CDM7160 sensor connected via I2C to an ESP8266, which reports over MQTT.
> Unfortunately, when I last looked, I couldn't find any consumer-grade sensors which used dual-NDIR sensors, only more expensive and less aesthetic commercial sensors. In the end I built my own using a CDM7160 sensor connected via I2C to a ESP8266, which reports over MQTT.
Looks like that's been discontinued. [1] Any advice for folks trying to build one now?
That's a shame, they've worked well for me. Unfortunately I can't recommend any others without doing some research, I found quite a few dual-NDIR sensor modules when I was looking and chose this one primarily based on availability and a reasonable datasheet.
A sibling post[1] mentioned a sensor that is listed as being dual-NDIR so should give reliable readings, and has a USB interface, so that sounds like one possibility.
This is true, but I don't think any consumer-grade sensor systems offer this as an option? I guess it might be useful if you're building your own sensor, and can't afford/justify a dual-NDIR sensor.
My understanding is that with single-NDIR sensors (like the MH-Z14 and 19), the auto-calibration is intended to overcome gradual particle buildup and beam degradation in the sensor chamber. While disabling it will prevent the scenario I described, you'll instead end up with gradual sensor shift as the sensor ages. I guess this could be minimised by manually calibrating the sensor outdoors on a regular basis.
Dual-NDIR sensors split a single beam into two chambers, so any degradation of the sensor beam affects both measurements, and the particle buildup in both chambers should also be approximately equal over time, so they should remain accurate over an extended period without any requirement for calibration. I built mine about 4 years ago and I do occasionally check to make sure they read ~400ppm when placed outdoors, last check was around 420ppm which suggests they're behaving reasonably well as they age.
Yeah, what they did not expect was the US blocking all exports. All vaccines had to be flown in from Europe, even though the same vaccine was produced just a few miles across the border.
Evidently, Canada didn't prioritize Q1 delivery dates. From Our World in Data [1], Canada's total dose administration fell behind the EU during February/March 2021, as Pfizer's European production facility went offline for expansion/refurbishment. The dose-delaying recommendation discussed in this article became official in March, at the peak of the 'vaccine gap'.
However, the overall procurement strategy was a success even in comparison to the EU. Canada's total dose administration rate caught up with the EU's by April at 25 doses/100 people, and during the spring/summer when vaccines became available to the general public Canada had greater availability.
From contemporaneous reporting, it seemed that in summer 2020 Canada believed manufacturers' assertions that they'd have vaccine production ready by 2021Q2.
In general, Canada's vaccine procurement strategy was successful, and the 'gamble' on second-dose delays discussed in this article proved useful at expanding availability among the general population. Complaints otherwise tend to focus on the US comparator (neglecting its effectively nationalized industry) or come from partisan political attacks.
'In order to provide more information about event severity within the S1 designation, S1 severity events have been separated into two columns in Table 1 based on whether each event is of sufficient severity to result in actual or simulated airbag deployment for any involved vehicle. Of the eight airbag-deployment-level S1 events, five are simulated events with expected airbag deployment, two were actual events involving deployment of only another vehicle’s frontal airbags, and one actual event involved deployment of another vehicle’s frontal airbags and the Waymo vehicle’s side airbags. There were no actual or predicted S2 or S3 events'