On viruses

Feb. 9th, 2020 09:45 am
izard: (Default)
I was thinking about ergodicity as applied to viruses like the common cold, the flu, and the like: how "horizontal" probabilities (across a population) correspond to the "vertical" (temporal) probability for one person. I can only experiment on one subject, so I performed two experiments.

1. It is said that flu vaccines make it highly unlikely to catch the current flu, across the population. My experiment: 7 consecutive vaccinated years and 9 unvaccinated ones (and counting). When vaccinated, I had the flu 6 times; when unvaccinated, I got it once.
2. A common cold virus used to make my life uncomfortable for ~4-6 days. Three years ago I infected myself with a probiotic nasal bacterium, and after my immune system coped (which took ~5 days of high fever), I no longer get the 4-6 day cold: a typical virus now takes just 2 days of symptoms, followed by ~5 days of much milder bacterial symptoms that do not interfere with my work and active life.
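
Put as rates, just to make the ergodicity comparison explicit: 6 flu episodes over 7 vaccinated years is about 0.86 per year, versus 1 episode over 9 unvaccinated years so far, about 0.11 per year. With a single subject the error bars are of course enormous.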

Obviously, even if it works for me, there is no guarantee that it scales.
I just wonder whether the second experiment interferes with the first...
izard: (Default)
Here is my take after a couple of months of playing with VR:
Vive, Oculus et al. are cool, but they have moved VR in the wrong direction. There should be no feedback from headset movements, just as in the VR of the 90s. The problem should be solved not from a technical-setup perspective, as it is now with sensors, but from a biomedical perspective, e.g. with a drug that temporarily weakens some links to the cerebellum. Then one could lie down wearing a headset and experience all kinds of movement in an open world. The only physical feedback loop that is absolutely necessary is eye tracking.
izard: (Default)
While browsing the internet recently, I stumbled upon a very interesting and bright researcher: a MIPT graduate who has been doing engineering/research work for the last 30 years and now, I guess, has decided to have some fun writing books and publishing some of his patents on zhurnal.lib.ru.

The patents he has chosen to publish are very interesting:
1. The best one: a patent describing a method of crafting a thermonuclear bomb without an atomic-bomb trigger. It could supposedly be assembled in a home lab on the cheap.
2. The next best: a patent describing a weapon capable of killing hordes of people regardless of personal armor, leaving no visible traces or property damage.
3. A patent describing a method and apparatus :) for obtaining free electricity from the skies. The biggest question is how to convert ~50 kV DC at 5 kW into 220 V / 50 Hz AC.
4. Bonus game: the author also describes a pogo stick on steroids. Not a patent yet, just the topic of some student's diploma project.

The guy's political views are ultra-left, anarcho-communist. The writing style of his fiction novels sucks big time, but the content is thermonuclear! Sometimes literally: it includes recipes for explosives you can make from butter, salt, and a laptop power adapter. Here is a good review of the books.

P.S. Waiting for the black helicopters to arrive.
P.P.S. Dear black helicopter masters, there is no need to disturb your people in Garmisch by sending a helicopter to Munich. I will come visit your lair in a week.
izard: (Default)
Always-on, always-connected mobile experience:

1. A free app that uses NFC to open the doors and start/stop the engine of a car. Once the app becomes popular, push an upgrade that records acceleration while the engine is on and roughly estimates the speed. When the speed is grossly over the limit, it calls the police or, better, automatically transfers the fine via online banking.

2. When you bring a broken phone to a service center, they check the log of an app that uses the accelerometer to record all free falls, and if there were any, they void the warranty (see the sketch after this list).

The last one, with some black humor, was under a cut.
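
For the second idea, the detection part is simple physics: in free fall an accelerometer reads roughly zero g on all axes. A minimal sketch of such a logger, where the 50 Hz rate and the read_accel_g() HAL call are assumptions for illustration:

```c
#include <math.h>
#include <stdio.h>

#define SAMPLE_HZ      50      /* assumed accelerometer sample rate */
#define FALL_THRESH_G  0.3     /* |a| below this looks like free fall */
#define MIN_FALL_MS    100     /* ignore shorter dips (noise, small tosses) */

/* Hypothetical HAL call returning one accelerometer sample in g. */
extern void read_accel_g(double *ax, double *ay, double *az);

void freefall_logger(void)
{
    int low_samples = 0;
    for (;;) {
        double ax, ay, az;
        read_accel_g(&ax, &ay, &az);
        double mag = sqrt(ax * ax + ay * ay + az * az); /* ~1.0 g at rest */

        if (mag < FALL_THRESH_G) {
            low_samples++;
        } else {
            /* dip ended: log it if it lasted long enough to be a real fall */
            int ms = low_samples * 1000 / SAMPLE_HZ;
            if (ms >= MIN_FALL_MS)
                printf("free fall: ~%d ms\n", ms);
            low_samples = 0;
        }
    }
}
```
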
izard: (Default)
I just had minor surgery, and the doctor recommended that I not talk and not walk outside for one day.

So I am reading the new Gibson novel, "Zero History", which I bought last week at Powell's. Within the first 50 pages, the protagonist tells someone that GPS is especially inaccurate near "sensitive sites". I have noticed that GPS is indeed sensitive to the current US military agenda: I have never observed better GPS reception than during our wedding trip to Jordan, not even close. I assume that helped the US military in the region at the time.

But making results less accurate? My GPS receiver is a passive device (I hope): it just receives timing signals from the satellites (clock, ephemeris, almanac) and computes a fix locally. The public signals are broadcast, so they can be degraded, but a distorted signal would cover a huge land area, not just the vicinity of a sensitive site.
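
For reference, this is roughly all a receiver does with those signals: turn transmit-time differences into pseudoranges and solve for its own position plus clock bias. A toy solver under heavy simplifications (four satellites with known ECEF positions, no ionospheric or relativistic corrections, no pivoting in the solver):

```c
#include <math.h>

/* Toy GPS fix: 4 satellites with known ECEF positions (meters) and
 * measured pseudoranges (meters). Unknowns: receiver x, y, z and the
 * clock-bias distance b (= c * dt). Real receivers apply ephemeris,
 * ionosphere and relativity corrections; this sketch skips all that. */

static void solve4(double A[4][4], double v[4]) /* naive Gaussian elimination */
{
    for (int i = 0; i < 4; i++)
        for (int k = i + 1; k < 4; k++) {
            double f = A[k][i] / A[i][i];
            for (int j = i; j < 4; j++) A[k][j] -= f * A[i][j];
            v[k] -= f * v[i];
        }
    for (int i = 3; i >= 0; i--) {
        for (int j = i + 1; j < 4; j++) v[i] -= A[i][j] * v[j];
        v[i] /= A[i][i];
    }
}

/* est[] = {x, y, z, b}; caller seeds it (zeros work) and gets the fix back. */
void gps_fix(const double sat[4][3], const double rho[4], double est[4])
{
    for (int it = 0; it < 10; it++) {          /* Gauss-Newton iterations */
        double J[4][4], r[4];
        for (int i = 0; i < 4; i++) {
            double dx = est[0] - sat[i][0];
            double dy = est[1] - sat[i][1];
            double dz = est[2] - sat[i][2];
            double d  = sqrt(dx * dx + dy * dy + dz * dz);
            r[i] = rho[i] - (d + est[3]);      /* measured minus predicted */
            J[i][0] = dx / d; J[i][1] = dy / d; J[i][2] = dz / d; J[i][3] = 1.0;
        }
        solve4(J, r);                          /* r becomes the update step */
        for (int k = 0; k < 4; k++) est[k] += r[k];
    }
}
```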

What else? I have only a few hypotheses, sorted by likelihood as I understand it.
1. Selective GPS jamming by a ground station near the sensitive site.
2. The GPS chip vendor makes sure accuracy is reduced near a sensitive site. But how do you add new sensitive sites once the chip is already on the market?
3. As with the M-code, a directional high-gain antenna could broadcast distorted data to a specific small region.
4. The GPS receiver is not a passive device after all, aaaahhh, the black helicopters are chasing me!
Anything else?

While I think 1 is the most likely, re 2: open-source GPS, anyone, maybe via SDR?
izard: (Default)
I am slowly working on an idea: a social auto-tune killer. My brother, who is in the music industry, explained to me that I did not invent anything special; you can buy software like that starting from $250.

I usually produce very messy code, so I have to use anything that helps make it tidier. That is why I am using Clojure (code under the cut).
It is not the final code, of course, but at least it looks maintainable. If I were writing it in Java or C++, it would have been a very bad mess.
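
For flavor: the core of any auto-tune is pitch detection, and a naive autocorrelation version fits in a page of plain C. This is an illustration only, not the code under the cut; correction is then just resampling each frame toward the nearest note's frequency.

```c
#include <stddef.h>

/* Naive pitch detector: autocorrelate one frame of mono samples and
 * return the estimated fundamental in Hz, or 0 if nothing was found.
 * Real auto-tune uses something smarter (e.g. normalized difference
 * functions), but the idea is the same. */
double detect_pitch(const float *x, size_t n, double sample_rate)
{
    size_t min_lag = (size_t)(sample_rate / 1000.0); /* cap at ~1000 Hz */
    size_t max_lag = (size_t)(sample_rate / 70.0);   /* floor at ~70 Hz */
    if (max_lag >= n) max_lag = n - 1;

    double best = 0.0;
    size_t best_lag = 0;
    for (size_t lag = min_lag; lag <= max_lag; lag++) {
        double acf = 0.0;                    /* correlation at this lag */
        for (size_t i = 0; i + lag < n; i++)
            acf += (double)x[i] * x[i + lag];
        if (acf > best) { best = acf; best_lag = lag; }
    }
    return best_lag ? sample_rate / (double)best_lag : 0.0;
}
```
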
izard: (Default)
A Facebook plugin that lets a user download a MIDI file, record a song, and upload it; it then corrects the voice to fit the tune and copies the result to the user's profile. Technically doable, with a huge potential user base (teens/school kids). Monetization: paid places in the ratings :) Profit!
izard: (Default)
Just posted a short article on Habr. All in Russian.
izard: (Default)
[livejournal.com profile] nadekuk mentioned in the comments to the previous post that the Seychelles' waters are the front line of the war with Somali pirates. However, for now it is all limited to hijacking ships with small crews (fishing, cargo), all for ransom.

Somalia and Mogadishu are not too far from the Seychelles, closer even than Réunion. We met plenty of tourists from Réunion there, but nobody from Somalia. Why don't the pirates attack the shores? The country is rich (especially compared to Somalia), full of well-off tourists, and has no coastal defense systems.



Famous pirate bases of the past, like Tortuga and the Barbary Coast, were notorious for attacking nearby inhabited and wealthy shores, plundering and taking hostages. This forced places like Costa Rica to build monstrous and expensive coastal defenses. It was difficult for pirates to attack successfully, but sometimes they managed.

The reason it is safe in the Seychelles (and elsewhere) is, I think, simple: wealth and assets used to be more liquid. Now real value exists either as bits in a computer or as shares/securities. The slave market is not so widespread anymore, either.

When Morgan plundered Panama, what did he take? Silver, gold, and prisoners. A captured cargo of spices or silk could be outrageously valuable and easy to sell.

Now they can only get a ransom from a ship's owner or insurer, and that is it! If pirates captured a remote Seychelles island, what would they take?? Cash from tourists' pockets?!
izard: (Default)
The airplane flight recorder, the orange device, is now based on solid-state storage, AFAIK. Previously it used magnetic storage.

When flying with Emirates today, I noticed they have a pico-BTS on board, apparently using a satellite uplink. To support more or less real-time GSM signaling, that uplink must always be online. So what prevents using the same link to mirror flight data to a remote location, one more secure than a device that sometimes can hardly be found?
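
The scheme would be trivially simple: keep writing to the local recorder exactly as today, and opportunistically mirror the same frames over the satellite link, queueing whatever did not make it through. A sketch (the frame layout and the recorder/uplink calls are placeholders, not any real avionics API):

```c
#include <stddef.h>
#include <stdint.h>

/* One flight-data frame; the fields are illustrative. */
struct fdr_frame {
    uint64_t t_ms;        /* timestamp */
    float    alt, ias, hdg;
    float    params[64];  /* everything else the recorder captures */
};

/* Placeholder I/O: the local recorder write must never be skipped. */
extern void recorder_write(const struct fdr_frame *f);   /* local SSD   */
extern int  uplink_send(const void *buf, size_t len);    /* 0 on success */

#define QLEN 4096
static struct fdr_frame queue[QLEN];
static size_t q_head, q_tail;

void log_frame(const struct fdr_frame *f)
{
    recorder_write(f);                  /* primary copy, exactly as today */

    queue[q_head] = *f;                 /* enqueue for remote mirroring */
    q_head = (q_head + 1) % QLEN;
    if (q_head == q_tail)               /* ring full: drop the oldest */
        q_tail = (q_tail + 1) % QLEN;

    /* drain as much as the satellite link allows right now */
    while (q_tail != q_head &&
           uplink_send(&queue[q_tail], sizeof(struct fdr_frame)) == 0)
        q_tail = (q_tail + 1) % QLEN;
}
```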

P.S. Just got back home; a trip report from the Seychelles is coming soon. I only have to read/scan through around a thousand e-mails first.
Today's sunrise at the Victoria/Mahé airfield:
izard: (Default)
Contrary to popular belief, Google is not a wannabe Big Brother. If it were, here is what it would have done already:

In the very same way as Google AdSense/AdWords, it could have created a service integrating private street-facing web cameras with Google Maps/Street View, serving local ads on top. They have all the infrastructure ready: AdSense, Google Maps, YouTube. Just some integration work, and they would have a truly evil but very useful service.
izard: (Default)
It's been a year and a half since I first decided I would implement a personal cloud solution. Six months ago I realized how to make it secure on the service-provider side; that was the biggest technical issue I needed to resolve to make it fly.

It was always difficult to find time to work on it, because there were always important work-related projects that obviously took priority. So I cheated: I registered the project as a demo for an internal conference. This gave me an opportunity to use the company's hardware, though 90% of the coding I still had to do in my free time. However, a clear deadline (this Monday) helped a lot with motivation. I managed to finish coding a prototype I could show to a wider audience a few days before the show :)

The demo was a success: it placed 3rd out of 40 projects, and it was the only one in the top 20 done by a single person as an unofficial project. Now, thanks to the feedback I got at the conference in Portland (I am writing this from PDX on the way home), I have a clear picture of the technical and marketing opportunities for AdHoC.

It is a skunkworks project not really related to my job, so I think it's safe to post about it before I get formal approval to make it open source. (If I don't get the approval and someone else builds and open-sources something similar, I'll be happy too.)

AdHoC is a service to enhance the user experience on Small Form Factor (SFF) devices. It allows _secure_ remote execution of any application on a "close" x86 box, redirecting the screen and input to the SFF device.

I call it a cloud because it satisfies 4 out of 5 clauses of the formal NIST cloud definition. It is a PaaS variation.

A technical description, for those who might be interested, went under the cut.
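
Purely as a hypothetical illustration of the idea (not the actual design, which was under the cut): the wire protocol only needs to carry input events one way and screen updates the other, wrapped in an authenticated, encrypted channel, which is where the _secure_ part lives.

```c
#include <stdint.h>

/* Hypothetical AdHoC wire format, illustration only. The SFF device
 * sends input events; the x86 box streams back dirty screen
 * rectangles. All of it would run inside an authenticated, encrypted
 * channel (e.g. TLS). */
enum adhoc_msg_type {
    ADHOC_INPUT_KEY   = 1,   /* key code + up/down flag     */
    ADHOC_INPUT_TOUCH = 2,   /* x, y, pressure              */
    ADHOC_FB_RECT     = 3,   /* compressed dirty rectangle  */
    ADHOC_APP_EXEC    = 4,   /* request to launch an app    */
};

struct adhoc_hdr {
    uint8_t  type;           /* enum adhoc_msg_type */
    uint8_t  flags;
    uint16_t reserved;
    uint32_t payload_len;    /* bytes following this header */
} __attribute__((packed));

struct adhoc_fb_rect {       /* payload for ADHOC_FB_RECT */
    uint16_t x, y, w, h;
    uint32_t encoding;       /* e.g. raw, zlib, jpeg */
    /* encoded pixel data follows */
} __attribute__((packed));
```
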
izard: (Default)
I've been doing embedded development since 1999, but only occasionally, so I cannot consider myself a real expert here.

Now, yet again, I wish I had a tool to help with a very typical need:
I have an embedded x86 platform running a rather stripped-down version of Linux: a normal kernel, but very few libs, with e.g. busybox or newlib instead of glibc. I have an app on a full Linux host that links dynamically against many other libs and a modern glibc. I need to make it run on the target.

The usual course of action is to carefully recompile the app and all of its dependencies for the target. However, it would be a killer app (or just the greatest method) if I could creatively use binutils to recursively collect the symbols the application requires and link all of them into a single static binary, down to the glibc functions.
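
The first step of such a script, walking a binary's direct shared-library dependencies, is easy even without libbfd: everything needed sits in the dynamic section. A minimal 64-bit ELF reader using plain <elf.h> (error handling mostly omitted):

```c
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Print the DT_NEEDED entries (direct shared-lib deps) of a 64-bit
 * ELF binary. A real "statifier" would recurse over these, collect
 * the undefined symbols from the same tables, and relink. */
int main(int argc, char **argv)
{
    if (argc != 2) return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) return 1;
    struct stat st;
    fstat(fd, &st);
    char *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) return 1;

    Elf64_Ehdr *eh = (Elf64_Ehdr *)base;
    Elf64_Shdr *sh = (Elf64_Shdr *)(base + eh->e_shoff);

    for (int i = 0; i < eh->e_shnum; i++) {
        if (sh[i].sh_type != SHT_DYNAMIC)
            continue;
        /* sh_link of .dynamic points at its string table (.dynstr) */
        const char *strtab = base + sh[sh[i].sh_link].sh_offset;
        Elf64_Dyn *dyn = (Elf64_Dyn *)(base + sh[i].sh_offset);
        for (; dyn->d_tag != DT_NULL; dyn++)
            if (dyn->d_tag == DT_NEEDED)
                printf("NEEDED %s\n", strtab + dyn->d_un.d_val);
    }
    return 0;
}
```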

I think such a script is possible in theory, and I wonder why it has not been done yet. Maybe because the host platform used to be x86 while the target was something else? Now, thanks to Atom, it is x86 on both sides of the JTAG cable quite often. Or maybe I am missing something and it has been done, or I am missing something fundamental that prevents this approach from succeeding???
izard: (Default)
struct sk_buff is quite big (for a reason): 4 cache lines.

The way it is accessed in the fast path (in reverse order) while the system runs a network throughput benchmark fools the hardware prefetcher on x86, making it prefetch useless data. This becomes quite noticeable at 10G; I've checked with VTune.

There may be a way to rearrange the struct's layout so that the prefetcher stops issuing unnecessary memory reads, saving cache capacity and memory traffic. But I would not bother doing it, because I think it is too x86-specific to be accepted into the kernel. And yes, I have not tested on AMD either.
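
The effect can be poked at from userspace. Here is a toy that streams over sk_buff-sized (4-cache-line) records in ascending vs. descending address order; how much the direction costs depends entirely on the prefetcher generation, so measure rather than take my word for it:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stream over 256-byte records (4 cache lines, the size of sk_buff at
 * the time) forward vs. backward, touching one byte per cache line. */
#define NREC (1 << 18)                 /* 64 MB total */
struct rec { char bytes[256]; };

static double walk(struct rec *a, int forward)
{
    struct timespec t0, t1;
    volatile long sum = 0;             /* keep the loads alive */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int pass = 0; pass < 16; pass++)
        for (long i = 0; i < NREC; i++) {
            struct rec *r = forward ? &a[i] : &a[NREC - 1 - i];
            sum += r->bytes[0] + r->bytes[64] + r->bytes[128] + r->bytes[192];
        }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    struct rec *a = calloc(NREC, sizeof(*a));
    printf("forward:  %.3f s\n", walk(a, 1));
    printf("backward: %.3f s\n", walk(a, 0));
    free(a);
    return 0;
}
```
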
izard: (Default)
I am playing more with the latest developments in NaCl. The obvious idea comes to mind:
it's not just a perfect way of running untrusted code in the browser; server-side (cloud) use could also be interesting.

Beyond the personal cloud idea I am evaluating, this could be a viable basis for EC2-style app hosting. There have been several SETI@home-like distributed computing systems based on idle cycles, but all of them had problems with trust: nobody is willing to run arbitrary code even on an idle PC. To overcome this, one could either make application developers' lives insanely difficult with certifications, managed sandboxes, etc., or... use NaCl :)

Home cloud.

May. 7th, 2010 08:58 pm
izard: (Default)
As I now move from being mostly Xeon-performance-focused to Atom-performance-focused in my daily job, I recalled my old idea. It's technical and goes under the cut.
izard: (Default)
I am working on improving a patch to the Linux mm subsystem that allows hardware partitioning of the last-level cache. (This is useful for some real-time scenarios, as it makes process "warm-up" latency more predictable.) The mm patch does not look very good for the general case, from both source-cleanliness and performance standpoints.
Now I wonder whether moving this feature into KVM would make sense. It would be cleaner, but real-time users hardly use KVM.
izard: (Default)
A shared cache in a CPU is a great thing for multicore: it allows efficient data sharing between cores and almost always shares capacity efficiently.

What if a developer thinks the cache is not being shared fairly between, e.g., 2 cores? There is no way to control this explicitly. But here is a workaround, albeit a weird one. If we write a custom allocator that hands out only addresses mapping to cache sets 0-7 for the 1st core and sets 8-15 for the 2nd core, we effectively make the cache non-shared. Unfortunately, the biggest contiguous area such an allocator can hand out is then 512 bytes (a 64-byte cache line multiplied by 16 sets, divided by 2 cores). The more data is allocated through this "weird cache-conscious allocator", the fairer the split gets.

The 512-byte cap is very annoying and thus likely unrealistic for practical use, but if we had a 128-set rather than a 16-set last-level shared cache, the cap would go up to 4K, which would work naturally with OS VM mechanics :). Fortunately, the last-level cache is indexed by physical address, so doing this at the OS level would lift the contiguous-memory limit entirely rather than just raising it to 4K, moving the complexity from the allocator to the OS.
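
To make it concrete, here is a toy version of that allocator, assuming the hypothetical 16-set cache above and, unrealistically, that these (virtual) address bits are what index the cache:

```c
#include <stddef.h>
#include <stdlib.h>

/* Toy "weird cache-conscious allocator": with 16 sets of 64-byte
 * lines, the set index repeats every 16 * 64 = 1024 bytes. Core 0
 * only gets chunks mapping to sets 0-7 (first 512 B of each 1 KB
 * period); core 1 gets sets 8-15 (second 512 B). */
#define LINE   64
#define SETS   16
#define PERIOD (LINE * SETS)     /* 1024 */
#define CHUNK  (PERIOD / 2)      /* 512: the cap discussed above */

struct weird_pool { char *base; size_t periods, next; int core; };

/* bytes must be a multiple of PERIOD (aligned_alloc requirement). */
void weird_init(struct weird_pool *p, size_t bytes, int core)
{
    p->base = aligned_alloc(PERIOD, bytes);
    p->periods = bytes / PERIOD;
    p->next = 0;
    p->core = core;              /* 0 or 1 */
}

/* Returns a 512-byte chunk confined to this core's half of the sets. */
void *weird_alloc(struct weird_pool *p)
{
    if (p->next >= p->periods) return NULL;
    return p->base + (p->next++) * PERIOD + p->core * CHUNK;
}
```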

Upd: after a careful study of prior art, it looks like I have reinvented the wheel, and made it square rather than round. There is a better way to partition a shared cache than the one described above.
izard: (Default)
It's pretty apparent, but just in case: in about 2 years there will be a few competing companies providing the following service for billboard ads: a copy of the billboard in the Wikitude/Layar/etc. AR universes. Right now there are none, and it will be difficult for the first one on the market to explain the value to customers.

But in a few more years it will become mainstream.
izard: (Default)
I was doing some crypto-system tuning a few weeks ago, and it occurred to me that the same approach could be used to solve one of the problems online casinos face.

It goes under the cut, as it's a bit technical.