The Semantic Web was a long-term goal that the Web as a whole rejected. The underlying technologies have been applied to many niches.
What's the current status of the Semantic Web? What practical applications are better than the popular combo of traditional software and machine learning?
What you describe sounds nothing like most safety-critical development I've heard of. By contrast, I heard the other person's story countless times when I studied high-assurance systems: very slow, top-down, process-heavy, paperwork-heavy, and built on outdated tools.
On the other hand, it sounds like the company you mentioned is worth imitating where possible. They sound awesome. Are you allowed to name them? Is there any writeup on how they balanced velocity and regulatory approval?
Unfortunately not. But the devices they make are absolute lifesavers, and it was one of the most interesting jobs I've had in the last couple of years because I think I learned more from them than they learned from me. I was just focusing on a handful of details; they had to keep the broader picture in mind all the time and educate me to the point that my knowledge became useful to them.
You are probably right that they are uncommon. But the company was led by a scientist who was very much involved in the process and the mission, and who offloaded as many of the non-essentials of the CEO job to others as possible. It made me feel I had gone back in time to be near HP when it was just founded. In the longer term I expect them to dominate the space.
The pilot was unconscious. They wouldn't be able to use a parachute or make it to an airport.
If an engine went out, the Garmin might still have a better shot at crash landing than an unconscious pilot. Maybe it could be trained to do emergency procedures. Or maybe it's a lost cause at that point. (I'm not a pilot.)
Yeah, it's targeting "micro"-controllers, not microcontrollers. I was hoping for a PyTorch answer to TF Lite.
This is still great, though. Previously, I thought a mobile model (e.g. speech or object recognition) would require me to learn both PyTorch and something like MLC in C++, then port between them.
If this is as it appears, I could develop a small, mobile-capable model on my laptop, train it on cloud GPUs, test it locally, and use this tool to produce a mobile version (or skip some of those steps?). That would keep us from having to learn C++ or MLC just to do mobile.
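To make that concrete, here's a rough sketch of the workflow I have in mind. The tiny model and file names are made up, and the export calls are plain stock PyTorch; whether this tool replaces or just wraps that last step is only my assumption.

    import torch
    import torch.nn as nn
    from torch.utils.mobile_optimizer import optimize_for_mobile

    # Hypothetical tiny model that a phone could plausibly run.
    class TinyClassifier(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, num_classes),
            )
        def forward(self, x):
            return self.net(x)

    model = TinyClassifier()
    # ...train on cloud GPUs, then evaluate locally on the laptop...

    # Export for mobile with stock PyTorch: trace the model, optimize it,
    # and save it in the lite-interpreter format.
    example = torch.rand(1, 3, 96, 96)
    traced = torch.jit.trace(model.eval(), example)
    mobile = optimize_for_mobile(traced)
    mobile._save_for_lite_interpreter("tiny_classifier.ptl")

The idea is that a PyTorch mobile runtime on the phone loads the exported file, so I never have to touch C++ or MLC myself.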
I mean, one can still learn other tools for their advantages and build things that way. However, ML students and startups might benefit greatly from this by being able to rapidly develop or port mobile apps. The overall ecosystem gets stronger with more competition.
The courses I most wanted to take are split between Coursera, Udemy, and EdX. The first two can give certificates cheaply. A merger could be really helpful if they do it in a month or two. ;)
It's true. I used to promote high-assurance kernels. They had low odds of coding errors but the specs could be wrong. Many problems Linux et al. solved are essentially spec-level. So, we just apply all of that to the secure designs, right?
Well, those spec issues are usually not documented, or new engineers won't know where to find a full list. That means the architecturally-insecure OSes might be more secure in specific areas due to all the investment put into them over time. So, recommending the "higher-security design" might actually lower security.
For techniques like Fil-C, the issues include abstraction-gap attacks and implementation problems. For the former, Fil-C's model might mismatch the legacy code in some ways. (Ex: Ada/C FFI with trampolines.) Also, the interactions between legacy code and Fil-C code might introduce new bugs, because an integration is essentially a new program. This problem did occur in practice in a few research works.
I haven't reviewed Fil-C. I've forgotten too much C and the author was really clever. It might be hard to prove the absence of bugs in it. However, it might still be very helpful in securing C programs.
More than anything, they need to match and then exceed Singapore's text and data mining exception for copyrighted works. I'll be happy to tell them how since I wrote several versions of it trying to balance all sides.
The minimum, though, is that all copyrighted works the supplier has legal access to can be copied, transformed arbitrarily, and used for training. They can share those works, and transformed versions, with anyone else who already has legal access to that data. No contract, including terms of use, can override that. They can also freely scrape it, though perhaps with daily limits imposed to avoid destructive scraping.
That might be enough to collect, preprocess, and share datasets like The Pile, RefinedWeb, and uploaded content that the host shares (e.g. The Stack, YouTube). We can do a lot with big models trained that way. We can also synthesize other data from them with less risk.
They mostly imitate patterns in the training material, shaped by whatever raises the reward during RL training. There are probably lots of examples of both lying and confessions in the training data. So, it should surprise nobody that next-sentence machines fill in a lie or confession in situations similar to the training data.
I don't consider that very intelligent or more emergent than other behaviors. Now, if nothing like that was in the training data (pure honesty, no confessions), it would be very interesting if it replied with lies and confessions, because it wouldn't have been pretrained to lie or confess like the above model likely was.
They go in to see, hear, and smell good things. They experience some products first-hand in a way that shows whether they're as advertised or not. They also know what's out of stock, with many immediate substitution options. There are also more coupon, markdown, or haggling opportunities for those who want them.
Finally, walking into stores lets you connect to people. Those who repent and follow Jesus Christ are told to share His Gospel with strangers so they can be forgiven and have eternal life. We're also to be good to them in general, listening and helping, from the short person reaching for items on a shelf too high to the cashier who needs a friendly word.
We, along with non-believers, also get opportunities out of this when God makes us bump into the right people at the right time. They may become spouses, friends, or business partners. It's often called networking. However, Christians are to keep in mind God's sovereign control of every detail. Many encounters are one-time or temporary events or observations just meant to make our lives more interesting.
Most of the above isn't available in online ordering, which filters almost all of the human experience down to a narrow, efficient process that a cheap AI could likely handle. That process usually has no impact on eternity for anyone, and it has less impact on other people. It also gives me fewer of the experiences God designed us to have, including the bad ones that build our character, like patience and forgiveness.
So, while I prefer online shopping, I try to pray that God motivates me to shop in stores at times and do His will in there. Many interesting things, including impacts on people, continue to happen. Some events hit the person so hard that, even as a non-believer, they know God was behind it. I'm grateful for these stores that provide these opportunities to us.