Been at this for a few weeks now with no problems... again... seductive indeed. Here is my original post if you missed it http://www.murga-linux.com/puppy/viewto ... 229#518229
Having FINALLY got a reasonable version of Google-Chrome to run on Puppy (thanks to forum members who helped: http://www.murga-linux.com/puppy/viewtopic.php?t=68911), I am just about to jump into the 'App Store' and see what that is all about.
Cloud Computing, a Huge Step Backwards
Nimbophobia: 4 more reasons to fear the cloud
http://notes.kateva.org/2011/06/nimboph ... -fear.html
It's been a gratifying week for my fellow nimbophobics. Our numbers are growing by leaps and bounds. Consider just four examples... These stories range from appalling (Apple) to annoying (excess ads in custom search pages). The Google PHR failure would be the worst, but it's somewhat mitigated by the data exit options they provide and by the two-year warning. Those options include CCR XML migration to Microsoft's HealthVault [1].
* Apple MobileMe transition and iCloud: Those beautiful online albums of thousands of images and videos? Kiss them all good-bye. Your massive web site? Sayonara.
* A few weeks after being caught dissembling about their encryption keys, Dropbox accidentally removes all security for all accounts for four hours.
* Google is shuttering its hugely hyped Google Health Personal Health Record. Hope you weren't relying on those online medical records you entered.
* My longstanding and much appreciated Google Custom Search pages are, as of today, abruptly overwhelmed with copious top and side Google ads.
Friends don't let friends rely on the Cloud. Don't put anything in the Cloud unless you have a way to move your data to an alternative platform. That's as true for your business processes as it is for your family photos.
[1] Any health informatics students looking for a semester project or an easy publishable paper? Create a PHR in Google Health Records. Export as CCR XML. Import into Microsoft HealthVault. Write a paper on the data loss.
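Before attempting the kind of migration described in [1], it helps to see what is actually in the exported file. Here is a minimal sketch, assuming a local CCR XML export (the file name is hypothetical; the namespace below is the standard ASTM CCR one, but verify it against your own export):

```python
# Hypothetical sketch: inspect a CCR XML export before migrating it.
# Assumes a file such as "export.ccr.xml"; the namespace is the
# standard ASTM CCR URN, but check it against your actual export.
import xml.etree.ElementTree as ET

NS = {"ccr": "urn:astm-org:CCR"}

def list_sections(source):
    """Return the tag names of the top-level sections in the CCR Body."""
    root = ET.parse(source).getroot()
    body = root.find("ccr:Body", NS)
    if body is None:
        return []
    # Strip the "{namespace}" prefix ElementTree puts on each tag.
    return [child.tag.split("}")[-1] for child in body]
```

Comparing the section list before export with what survives an import elsewhere would be one quick way to spot the data loss the paper idea above is about.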
[url=http://www.murga-linux.com/puppy/viewtopic.php?t=69321][color=blue]Puppy Help 101 - an interactive tutorial for Lupu 5.25[/color][/url]
Hello,
Probably need Cloud Insurance
Close the Windows, and open your eyes, to a whole new world
I am Lead Dog of the
Puppy Linux Users Group on Facebook
Join us!
Puppy since 2.15CE...
Who's next?
How Apple and Amazon Security Flaws Led to My Epic Hacking
http://www.wired.com/gadgetlab/2012/08/ ... cking/all/
Gee, what's this thing we call 'cloud computing'?
Well, you have a central datacenter with a bunch of internetworked servers (interconnected computers/systems to process lots of data at once). We can't let one person have full (root) access, or let one person hog everything in the datacenter; that would be too dangerous and/or expensive. So everybody shares the resources (user limits) and they are able to login from their home/work/etc. computers (remote systems) and access their allotted portion of the datacenter.
Gee, where have I heard of this before?
Well, in the '70s and '80s (and a little bit in the '60s), we had gigantic computers (we called them "mainframes", and boy were they huge!) that didn't do very much and could hardly be moved at all. So people had 'terminals' (remote systems*) which they used to log in to the central computer and do their work or communicate (the old BBSes, Bulletin Board Systems). Obviously, if one person had full (root) access, or hogged too much computing power, there would be problems, so there were rules about that (user limits). We called it "computer timesharing" back then, though.
Gee, everything old is new again?
Yep. Funny, isn't it?
*In the older "timesharing" methodology, the terminals were often what are called 'dumb terminals' -- little more than a screen, keyboard, and (sometimes) printer, connected to the central mainframe computer via wires. The closest modern equivalent is the 'thin client', a low-power computer that is basically a fancy interface between a server and a user. Not much difference, then!
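Those timesharing "user limits" didn't disappear, either -- modern Unix systems still enforce per-process resource caps. A minimal sketch, using Python's standard-library resource module (POSIX only), just to show the mechanism:

```python
# Minimal sketch of per-process "user limits" -- the same idea the old
# timesharing systems enforced, still present on modern Unix.
# Uses only the stdlib resource module (POSIX only).
import resource

def show_limits():
    """Report the (soft, hard) caps for a few classic shared resources."""
    names = {
        "cpu_seconds": resource.RLIMIT_CPU,       # CPU time quota
        "open_files": resource.RLIMIT_NOFILE,     # file descriptor quota
        "address_space": resource.RLIMIT_AS,      # memory quota
    }
    return {name: resource.getrlimit(rl) for name, rl in names.items()}
```

Run it under any login and you'll see the quotas your admin (or distro default) gave you -- not so different from a 1970s operator rationing mainframe time.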
I see the cloud as a commercial response to advancing technology.
That is to say, they realized that storage is increasing in capacity while shrinking in physical size, to the point that we may eventually be able to hold all the information in the world on one chip in our hand.
When this happens and processor speeds sufficiently increase, a civilian could complete very large computing projects. This could compete with industry.
I don't think Google or the government wants you developing a "warp drive" or something that competes with their products, so the only way to stop that sort of capability is to squash it before it gets here.
If they make it so you can't buy a computer that stores data, and they choose who has access to storage, then the corporations can limit such development even when we get 375 core "octohertz" processors and 900 terabytes of RAM.
At that point, only "hackers" who butcher equipment and make new storage solutions, or otherwise circumvent the cloud, will have any ability to challenge this sort of thing.