After this week’s Google/Microsoft column appeared in the New York Times, I got a message from an old friend, Rohit Khare, that sparked some thinking about our vulnerability as individuals when our data is held in the cloud — somebody else’s cloud. How do we save it, get it back, destroy it? Given the recent case of Facebook hanging on to old user data essentially forever, this is not just a theoretical concern.
“’Cancellation on a whim’ is a key insight,” wrote Rohit. “After all, with desktop software you at least had the right to keep using what you wanted, as long as you kept the old hardware/software/OS running — I know of mission-critical custom apps still running on Win98!”
“SaaS (Software as a Service) for businesses may have SOME rights, but consumer services leave you completely out in the cold, and are unavailable afterwards at ANY price.”
Well, Rohit, whom I met when he was an undergraduate at CalTech, who later earned a PhD from UC Irvine, and who has worked for both the World Wide Web Consortium and some startups, ought to know. But I find myself wondering whether, in fact, we really ARE out of luck when our SaaS vendor packs it in.
Say your favorite web-based application is suddenly unavailable because the group that wrote it has disbanded. Are you truly SOL, those embarrassing pictures of your old girlfriends gone forever? Today I’d say “yes,” unless you’ve made a heroic effort to save the data in some database or file system that is unlikely to be compatible with the original application. No vendor I am familiar with has created an end-of-life (EOL) migration tool, though they certainly could. What we need is an API, and I am here to propose one.
If web- and cloud-based application software is here to stay (it is) then there will inevitably be backup and EOL issues as platforms and companies die or are acquired. The best way to handle this in my view is for the Internet Archive to propose an Application Programming Interface to handle data migration and preservation issues for obsolete web applications and services.
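To make the idea concrete, here is a rough sketch in Python of what such an interface might cover. Every name and method below is my own invention for illustration; the Internet Archive offers nothing of the sort today.

```python
from abc import ABC, abstractmethod


class DataEscrowAPI(ABC):
    """Sketch of an end-of-life escrow interface a web service might implement.

    Nothing here is a real Internet Archive API; the names are invented
    purely to show what an EOL agreement would need to cover.
    """

    @abstractmethod
    def deposit(self, service_id: str, sealed_blob: bytes) -> str:
        """Store an encrypted snapshot of the service's data; return a receipt ID."""

    @abstractmethod
    def update(self, service_id: str, receipt_id: str, sealed_blob: bytes) -> None:
        """Replace the held snapshot with a fresher one (run on a schedule)."""

    @abstractmethod
    def declare_end_of_life(self, service_id: str, evidence: bytes) -> None:
        """Open the 'envelope' once the service is verifiably dead."""

    @abstractmethod
    def retrieve(self, service_id: str, user_id: str, credentials: bytes) -> bytes:
        """After end of life, let an individual user pull (or destroy) their own data."""
```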
I think there is nothing Brewster Kahle, the Archive’s founder, would like better than to preserve for posterity such code and associated data.
Imagine a social network app that would continue to work after death, at least to the extent that members could retrieve (or destroy) their own information.
The problem with this idea, of course, is that this also creates a very attractive interface for hackers and thugs. But I have an answer to that, too. What we need is the digital equivalent of one of those envelopes characters sometimes leave in books and movies — an envelope inevitably labeled “to be opened in the event of my death.”
Participating organizations would store compressed and encrypted versions of their data with the Internet Archive, where they would be held in an inactive state but updated frequently. Then, in the event of that outfit’s death, the digital envelope would be opened, revealing a decryption key and enough application code to get a vanilla version of the original net app up and running. It would, as archives are intended to do, preserve the final state of the application as well as its final data.
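The “sealed envelope” part is technically the easy bit. A minimal sketch, assuming Python’s standard zlib and the third-party cryptography package; nothing here is an actual Internet Archive mechanism:

```python
import zlib

from cryptography.fernet import Fernet  # third-party "cryptography" package


def seal_envelope(raw_dump: bytes) -> tuple[bytes, bytes]:
    """Compress and encrypt a data dump for deposit with an archive.

    Returns (sealed_blob, key): the blob is re-deposited on every update,
    while the key stays in the sealed envelope, to be released only when
    the service is declared dead.
    """
    key = Fernet.generate_key()
    sealed_blob = Fernet(key).encrypt(zlib.compress(raw_dump))
    return sealed_blob, key


def open_envelope(sealed_blob: bytes, key: bytes) -> bytes:
    """What the archive does after the service's death: decrypt and decompress."""
    return zlib.decompress(Fernet(key).decrypt(sealed_blob))
```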
This is going to be a big problem in the future. I suggest we think about it now.
Meanwhile, back to Rohit, whose new company is called Ångströ. Two obscure diacritical marks in a single name tell me this is a company to be reckoned with. Ångströ apparently creates real-time notifications when specific data changes in social network applications, creating a kind of cross-platform news feed about you or anyone else you happen to be stalking. Cool.
Everything Rohit is involved with is interesting. His threaded discussion group, Friends of Rohit Khare (FoRK), has been around for more than a decade and has long offered some of the most interesting (and funniest) examinations of technology news on the Net. It is now an age-restricted group on Yahoo.
Rohit does everything his own way. When he was still an undergraduate I employed Rohit and his business partner Adam Rifkin as consultants, bringing them to Japan to make a presentation to a major client. The trip was L.A. to Tokyo and back but Rohit and Adam parsed the United Airlines routes until they found a way to get to Tokyo for the same money by going around the world FIRST CLASS, earning quadruple frequent flier miles in the process. The boys arrived on schedule though lubricated the entire way with Johnny Walker Blue Label.
We dried them out a bit and their presentation was boffo.
An API is a sterling idea in the abstract, but honestly, how can you possibly think that a social site like Facebook would let a competitor (let’s just say MySpace) get their hands on it? I can’t imagine Google would hand the keys to their web apps to MS or Yahoo, either.
And even if by some gargantuan feat of lobbying/protest/legislation you managed to get the giants to sign up to an API, it would just be the ’90s MS “embrace and extend” strategy all over again. After each point release, nothing would work right because new flaws would be baked in to throw competitors off the trail.
The API would have to be run by a disinterested third party who could be trusted to keep information secure until such time as it became permissible to release it. Unfortunately, trust is in awfully short supply these days, so it is hard to imagine Facebook handing their code over to anyone.
Thanks, banksters!
Oh, jeez. As soon as I saw Rohit’s picture I knew this was going to be funny 😉
Anyway, good history lesson. I’m filing this under “Stuff I can tease Adam and Rohit about later.” More fun posts, please 🙂
-Erica
I think it’s a great idea, something a company could demand, but not Joe Sixpack on Facebook, not unless he joined forces to make some kind of collective demand, and that seems unlikely given that all Joe Sixpack seems to care about is… himself. He can be bought with a $300 tax rebate, a free month’s credit, or mortgage approval for a loan that he can’t afford and that puts him deeper into servitude. He’s just too dumb and selfish to care… so he gets what he deserves. Unfortunately, the rest of us get sucked into the whirlpool of his sinking ship. Welcome to ‘Merica.
The cash-strapped city of Los Angeles is considering dumping its old custom software (and hardware, and maintenance) to put data “in the cloud” at Google. This brings up lots of concerns, for instance keeping police arrest records very, very private.
https://www.latimes.com/news/local/crime/la-me-public-records17-2009jul17,0,4147298.story
@John: All Google has to do is roll out the Gtoken (Google 2-factor authentication token) as a paid upgrade to Gmail and Google Apps. Would have prevented the Twitter hack. A few more thoughts on that here: http://chrisco.wordpress.com/2009/07/16/why-the-twitter-breach-is-bullish-for-two-factor-authentication/
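For what it’s worth, there is nothing exotic inside such a token. The “Gtoken” above is hypothetical; the sketch below is just standard HOTP per RFC 4226:

```python
import hashlib
import hmac
import struct


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time password per RFC 4226: HMAC-SHA1 of a counter, truncated to digits."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# The token and the server share `secret` and stay roughly in sync on `counter`,
# so a phished password alone is useless without the current code.
print(hotp(b"shared-secret", counter=42))
```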
“Say your favorite web-based application is suddenly unavailable because the group that wrote it has disbanded, are you truly SOL, … Today I’d say “yes,” unless you’ve made a heroic effort to save the data in some database or file system that is unlikely to be compatible with the original application. No vendor I am familiar with has created an end-of-life (EOL) migration tool, though they certainly could. ”
LibraryThing is certainly aware of these concerns and has gone partway to mitigating them, in that you can, at any time, request that the LibraryThing web site return all the data you’ve dumped into it as a CSV file. This may not be a perfect solution, but it is a start.
(For example, I have no idea whether, if LibraryThing went bankrupt, they would be kind enough to send email to all their users telling them to grab their data now, before the servers are switched off; and one would of course need a replacement app/service that was willing to read in data from the LibraryThing CSV file.)
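Even a plain CSV export is worth a lot, though, because reading it back into something usable is trivial. A sketch in Python; the file name and column names are guesses, not LibraryThing’s documented export schema:

```python
import csv


def load_export(path: str) -> list[dict]:
    """Read a CSV export back into plain dictionaries, one row per book."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


# Hypothetical file name; the real export defines its own column names.
books = load_export("librarything_export.csv")
print(len(books), "records recovered")
```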
Today’s action by Amazon to “kill” purchased copies of “1984” and “Animal Farm” on all Kindle readers is not unrelated to concerns about cloud computing. This relates to the entire question of “ownership” when content is purchased or created in some association with cloud services.
For example, it is not clear to me that content that I create is still exclusively under my control if I store it on a cloud service. If the service is breached, and my content leaks out, I have lost control of it. I can try to convince myself that I still own it, but the truth is that once the leak occurs, I might as well have released the content to the public domain.
Likewise, when content is purchased in soft format but delivered to a proprietary system (Kindle, iPhone, etc.), I am certain that I don’t really own anything. All I have bought is a license to use the content until some entity in the cloud changes its mind, reaches out, and yanks it back.
Welcome to the future.
Cringe,
This is not a cloud issue. It is a backup issue. Regardless of where your data is stored, you should have a backup. In this case the backup would ensure you have your data when a service provider decides to change the ballgame on you.
The Amazon news today is game changing. I predict laws will be passed, and possibly Amazon has destroyed their business in one careless stroke. Once you lose the public trust, you never get it back. Look at Microsoft. Probably the most hated company in history.
Not quite, Mkkby — backing up 1000s of screens of your pages on Facebook isn’t quite the same thing. Your new cloud-y social life isn’t stored in a neat pile of Word docs in a folder; the very nature of the data you’re going to lose isn’t document-oriented anymore.
Even if you dumped out the entire database Facebook kept on you, how would you make sense of “Rohit sent you a Long Island Iced Tea using Booze Mail on 7/19/08 at 3:43:02AM GMT”? It’s a retired Facebook app, it’s a meaningless timestamp, and it probably won’t make sense glommed together with all the other cruft in your Facebook account transcript.
The key is context: knowing that you had just sent me a Virgin Mary at 3:42 AM makes all the difference 🙂
More practically, the virtualization argument (“brain-in-a-vat”) has been best explored for preserving videogames. A DVD of someone playing Pac-Man hardly conveys the experience of playing it, and for now we’re lucky to have emulators galore still available.
But “replaying” Facebook a decade hence? Good luck — we don’t have a way to express that concept, much less a data-escrow API…
Mkkby: Good comment and topic: “Amazon has destroyed their business in one careless stroke. Once you lose the public trust, you never get it back.”
Although I think “destroyed” is too strong a word, and probably also “never.”
Probably the majority of Amazon customers don’t know or care about Amazon’s jack-booted-thug maneuver, and/or are not smart or thoughtful enough to understand why it’s such a big deal, or what it *tells* us about Bezos and Amazon’s board and management: that they are just (another) bunch of greedy pricks who care more about money than anything else (principles, trust, honor).
Reminds me of the Facebook tells we’ve received (privacy, TOS, Beacon ad platform, more), and some others. It almost seems like the same rudderless people are involved in all these stories. That itself might not be surprising. It’s like politics seems to be, with the corrupt and unethical able to lie and cheat their way to the top (if your opponent is playing fair and you are playing dirty, maybe you have a higher probability of “winning” or getting them to give up in disgust). I hope I’m wrong, but that’s what it looks like (at least sometimes).
Software escrow has been around for decades. I’m sure a net-happy one is around the corner (if not already).
This might not be too hard under an Amazon-style cloud where the cloud consists of full virtual machines running under a reasonably popular virtualization system like Xen. Amazon already has the ability to back up the machine state and stick it somewhere else. Fire up your virtual machine under Xen somewhere else and you have moved your data AND its environment.
It gets more complicated when the environment is a network of virtual machines instead of a single virtual machine, but the principle more or less holds true:
At this point it’s often simpler to deal with whole machines than with the many setups of programs interacting inside machines (see the sketch after these examples).
Example: appliance CDs or VMware machines where they throw in a Linux distro with the application already configured:
Ingres database/BI system? Sure, here it is, running and set up under rPath.
Network monitor? Here is one already configured to run under VMware.
Disk cloner? Here’s a live CD distro with a friendly frontend for a variety of disk and filesystem cloning tools. And so on.
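Here is what the whole-machine approach might look like in practice, sketched with Python’s subprocess and the classic Xen xm toolstack; the domain name and paths are made up:

```python
import subprocess
from datetime import datetime, timezone


def checkpoint_domain(domain: str, archive_dir: str = "/srv/escrow") -> str:
    """Suspend a running Xen guest to a checkpoint file, then resume it.

    Assumes the classic Xen "xm" toolstack and root privileges; the
    checkpoint (plus the guest's disk image) is what you would ship to
    an archive so the whole environment could be revived elsewhere.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    checkpoint = f"{archive_dir}/{domain}-{stamp}.chk"
    subprocess.run(["xm", "save", domain, checkpoint], check=True)   # suspend to file
    subprocess.run(["xm", "restore", checkpoint], check=True)        # bring it back up
    return checkpoint
```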
We are in the accounting SaaS business and plan to offer a data escrow based on XBRL-GL to our cloud customers. That way our customers will always have access to their data in a format based on an open standard. Although XBRL-GL is not yet widely adopted, we expect more tools will come to market that offer access to and conversion of XBRL-GL instance documents (the actual text files holding the data).
XBRL GL, the familiar name for XBRL International’s Global Ledger Taxonomy Framework, is a series of modular taxonomies developed by XBRL International and a framework for its extension for representing the information found in a typical accounting system using XML and the XBRL Specification.
XBRL is becoming popular rapidly as more countries are implementing XBRL as the standard for data exchange with government, tax, banking, etc. See https://www.xbrlplanet.org
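As a rough illustration of what an escrowed ledger entry could look like as an XML instance document, here is a sketch; the element names are loosely modeled on XBRL GL’s gl-cor module and should not be read as the exact published taxonomy:

```python
import xml.etree.ElementTree as ET

# Element names below are loosely modeled on XBRL GL's gl-cor module and the
# namespace URI is a placeholder; a real instance document would follow the
# published taxonomy exactly.
GL_NS = "http://example.org/placeholder/gl-cor"


def ledger_to_xml(entries: list[dict]) -> bytes:
    root = ET.Element("accountingEntries", xmlns=GL_NS)
    for entry in entries:
        header = ET.SubElement(root, "entryHeader")
        detail = ET.SubElement(header, "entryDetail")
        ET.SubElement(detail, "account").text = entry["account"]
        ET.SubElement(detail, "amount").text = str(entry["amount"])
        ET.SubElement(detail, "postingDate").text = entry["date"]
    return ET.tostring(root, encoding="utf-8")


print(ledger_to_xml([{"account": "1200", "amount": 99.50, "date": "2009-07-17"}]).decode())
```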
Ah, the cloud, a faith-based computing initiative. Have faith that the provider actually knows what they’re doing with respect to backup, security, redundancy, etc. Have faith that the provider won’t be purchased by another company in order to kill the service in favor of the acquirer’s – the one you deliberately didn’t choose for reasons of your own. Have faith that the provider is actually a responsible business – reputation in the Internet Age having a history that can be measured in months.
Me? I’m an atheist.
In my opinion, the whole idea of cloud computing is stupid (not for the vendors, of course). And I’m not talking about just data, but even apps. I’ve heard that “there is a sucker born every minute” and this would seem to bear it out.
It reminds me of working at a company where MS Office was on the network, not my desktop. What a pain in the ass, and I soon fixed it. Network slow some days, down others. Of course nowadays I presume that never happens.
If you need to save money, cut off one foot and only buy one shoe—seems to me like a similar situation.
Scott
cschneid’s brilliant post hit the nail squarely on the head: the cloud truly is a faith-based computing initiative. And as a technician, I’ve found that faith doesn’t get you very far when it comes to computers.
Me? I’m with schneid.
I’m a little unclear as to WHY any company would want to spend money doing this.
I mean, as a user I’d love them to, sure. But when a company folds, shouldn’t it—in the best interest of shareholders—sell the data and the code to the highest bidder (probably their biggest competitor)?
Interesting idea. You leave out the case where a vendor deliberately shuts down a service in an attempt to force users onto a new (incompatible) offering that is more profitable. Giving up that stick may be more than some software companies are willing to do.
“… that sparked some thinking about our vulnerability as individuals when our data is held in the cloud — somebody else’s cloud. How do we save it, get it back, destroy it?”
So, how secure, how “permanent”, how reliable is the cloud?
It simply isn’t. And it will probably never be. My two cents.
In the early nineties, we discovered some exciting pages on the early Internet, dealing with how-to’s, exciting new applications, early articles, newsreaders, whatever. We bookmarked them in order not to lose them. At one point I had collected some six hundred bookmarks, nicely stored in a hierarchical folder structure. A year later, the disappointment was considerable: 80% or so of these bookmarks led nowhere.
At that time I printed exciting articles to PDF files and saved them on my computer in order not to lose them. After a while I had hundreds of PDF files, nicely stored in a hierarchical folder structure on my PC, and I must admit that – until Google Desktop came around years later – it was hard to find that one article again in my folder structure.
In some way I agree with “Mkkby: This is not a cloud issue. It is a backup issue.” We learned this the hard way. Changing PCs over and over again, experiencing disk crashes every other drive, evolving from a first 10MB hard disk to today’s terabytes: if we wanted this exciting info to be available tomorrow, we were forced to save it somewhere “safe”.
Hardly anything has changed.
Today, numerous links presented by Google still lead nowhere. That one site with the exciting article you really need has disappeared. They either went broke or just vanished; who knows, it’s gone. If you’re lucky you will find it in Google’s cache. If you really needed it, if you really wanted to keep it, you should have printed it to PDF and saved it on your PC, with a name including the date of the article to make it easier to delete outdated stuff later.
Blogs will be different? Social networks will be different? Forget it. Some time ago I checked (not “read”…) a book giving an overview of some 1000 social networks (Social Networks Around The World, 2008, An De Jonghe). The majority of them do not exist anymore today. And although the book is only a few years old, it is hopelessly outdated.
The Internet has been, is, and will be “temporary”, “perishable”: here today, gone tomorrow. The time between “there” and “gone” might be several years, or a few weeks or days, but to date nothing on it has ever been “permanent”. Nothing. And next Wednesday the Internet might be a completely different world.
So, why should something stored in a slightly different way called “cloud” or “SaaS” suddenly be treated, interpreted, behave in a different way? Why should anybody coming up with a new service on the Internet be able to give you more, better or different guarantees compared to before?
I think what we need is a way to break the lock-in of the hour: data lock-in. I’m not sure it’s a priority for enough people to become a reality, but here is how I think it could work.
Every user should provide their own storage on a P2P file system. You contribute space from your computer and/or from buckets on S3 or some other source, and because you provide space you get access to space on the P2P file system. You can then authorize cloud services to access this storage. The services can keep a copy of your data, but they are contractually obliged to keep your storage up to date. The service also has to store its internal copy of the data using a key it gets from your storage, so it can serve you only as long as you authorize it. The day you revoke the authorization, they still have the data but can’t read it anymore. Ideally, the storage in the P2P file system would follow public specifications (which could be official standards).
An example use could be as follows. You have an account with Gmail and one day you decide to switch to YMail for whatever reason. You revoke Google’s authorization, authorize Yahoo, and you’re up and running. After some time Yahoo has a local copy of all your email, just as if you had used their service forever (except for the email address). This could also let you authorize two services to access your storage and run different services on it.
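A toy sketch of the authorize/revoke idea, using the third-party cryptography package; the class and method names are invented, and it assumes the service really does keep its copy only in encrypted form:

```python
from cryptography.fernet import Fernet  # third-party "cryptography" package


class PersonalStore:
    """Toy model of a user-owned store: each service gets its own key, so
    revoking a service leaves its cached (encrypted) copy of your data unreadable.
    All names here are invented for illustration."""

    def __init__(self) -> None:
        self._service_keys: dict[str, bytes] = {}

    def authorize(self, service: str) -> bytes:
        key = Fernet.generate_key()
        self._service_keys[service] = key
        return key                                   # handed to the service

    def revoke(self, service: str) -> None:
        self._service_keys.pop(service, None)        # the service's ciphertext goes dark

    def encrypt_for(self, service: str, data: bytes) -> bytes:
        return Fernet(self._service_keys[service]).encrypt(data)


store = PersonalStore()
gmail_key = store.authorize("gmail")                 # invented service name
mail_blob = store.encrypt_for("gmail", b"my mail archive")
store.revoke("gmail")                                # they keep mail_blob, but can no longer read it
```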
Bandwidth is the biggest technical hurdle for this system, but the one big problem is that people don’t seem to care very much what happens to their data, or they reject cloud computing as stupid because it doesn’t give them control over their data. I, on the other hand, think we have to get an Open Data movement started, similar to the Open Source movement, to break this new lock-in and shape how cloud computing develops. We have to influence how cloud computing develops, because the only certainty is that it will grow.
I see two main problems with subscribing to any online services (not just those that fall under the “cloud” umbrella):
The first is the lock-in and obsolescence risk that you are also worried about. It’s for that reason that I only see the two extremes as viable options: either go with a player that has enough critical mass that there’s very little risk of them going out of business (e.g., Google or Microsoft), or do it yourself by spending your own time and money on hosting your own services. And the latter option is only available to those with the luxury to do so. But I don’t see the middle ground as an option; for example, I’d be too worried about losing my email hosting provider if they were a mom-and-pop shop, much as I’d prefer to support the underdogs.
The second problem I have is privacy. I don’t see how any company run by sane people would rely on cloud services from the likes of Google and Microsoft when so much of their data is confidential. It’s especially hard to swallow the idea of entrusting your company’s data to large corporations who may potentially consider you to be their competitor (maybe not today, but perhaps sometime down the road when they set their sights on your market segment). Sure, they all have privacy agreements, but will they really hold up in court, especially when you’re up against their armies of better-paid lawyers?
Thanks for all the articles you write, I greatly enjoy reading them.
A minor nit: the spelling of Caltech is with a lowercase “t”.
http://styleguide.caltech.edu/wordmark
“””
2. The name Caltech must never be spelled with an upper-case “T” in the midst of other lower-case letters. This usage is wrong.
“””
Faith-based computing is putting the floppy with the term paper that will decide my semester grade into the school computer lab PC back in 1993 and praying that it will print without destroying my file. The last three years, during which I’ve given up MS Office and lived in Google Apps, have earned much of my faith. Even today desktop software crashes happen. Good luck with your data if you didn’t save often. Web apps generally auto-save for you every minute.
The infrastructure underlying any industrial revolution just has to catch up, but to say cloud computing is stupid is like saying that because the electricity supply is unreliable, we should forget about using computers entirely. A case in point: when my office building suffered a power outage recently, my co-workers who depended on their desktop PCs were idled, while I kept working on my laptop and could have kept going indefinitely at the coffee shop down the street. The cloud-based resources I needed for the distributed web app I’m developing were not an issue over WiFi. Sure, the IDE I was using is a local install, but I could hand-code in a simple online text editor if I needed to, and even if WiFi was cut off, Google Apps would’ve kept working on my laptop with Gears.
Cloud computing as a concept is not even new. Just read some old sci-fi stories. You might say the coffee shop is the perfect model for the future office. You’ve got juice for your machine, the web, and your brain. And if the whole neighborhood goes down, just drive to the next OK spot. To quote Cringely: “Me Mobile, You Kaput”.
I see this argument as similar to the issue of Windows and viruses and hacks: had we stayed with DOS we would be less vulnerable, but we didn’t / wouldn’t / couldn’t. The value of Windows is that everything interoperates in a smooth way. The value of this functionality over DOS makes us forget that it has a price in the form of security issues.
The issues with cloud computing will be similar. The change will happen because the increased functionality / features make it attractive. The older generations (including me) will fight it for a while and keep our data and backups but the newer generations will use it and accept the faults because it is what they know.
Maybe this opens up space for a new service: storing information in a raw format for use in on-line applications. For example, rather than uploading photos to Facebook one would have those photos hosted either “at home” (name your web-enabled place) or with a trusted service (something right up Google’s alley). Facebook would store photo-tagging associations as those are within the context of their site. It is analogous to the semantic web: data is separated from presentation.
Granted, hosting with Google leaves open the possibility that they go “poof” along with your stuff, and centrally hosting your photos/code/etc. for use on multiple sites means an outage at the hosting service takes your content down from all those sites. I guess we have to decide how much responsibility we want for our own content, which would no doubt be based on its quality or mission-critical nature.
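A toy illustration of that separation: the social site keeps only lightweight references and tags, while the bytes live wherever you (or a trusted host) choose. The field names and URL below are invented:

```python
# The social site stores only references and tagging associations; the photo
# itself lives at a host you control. Field names and the URL are invented.
photo_record = {
    "photo_url": "https://photos.example.net/albums/tokyo-1997.jpg",
    "tagged_people": ["Rohit", "Adam"],
    "caption": "Around the world, first class",
}


def render(entry: dict) -> str:
    """The site composes its page from the reference; move or delete the photo
    at the source and every site pointing at it follows along."""
    return f'<img src="{entry["photo_url"]}" alt="{entry["caption"]}">'


print(render(photo_record))
```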
“Participating organizations would store compressed and encrypted versions of their data with the Internet Archive, where they would be held in an inactive state but updated frequently. Then, in the event of that outfit’s death, the digital envelope would be opened, revealing a decryption key and enough application code to get a vanilla version of the original net app up and running. It would, as archives are intended to do, preserve the final state of the application as well as its final data.”
A very interesting idea; however, it will not work. Ask anyone responsible for disaster recovery. Irrespective of getting someone to store their source code (or some vanilla version) somewhere, who is going to test it? And it must be tested. Every time a machine changes, a software version is upgraded, or a fix is put on a system, you are potentially making your archived code unrunnable.
I don’t have another solution, but I think that in order to retrieve or delete data from a cloud environment, the data storage format must be standard across all the applications, so that any app can access and process it when provided with the correct keys.
A couple of years ago I went through an end-of-life experience with the Yahoo Photos service. The site was closing down and they let me decide whether I wanted them to move all my pictures on Yahoo Photos to Shutterfly (the default option, I think) and/or send me a DVD of the pictures by post. I chose both. They moved my pictures to Shutterfly and the DVD arrived in the mail within 10 days. All in all, it was a painless experience, but it goes to illustrate that such EOL events with hosted services are already happening, and probably more frequently than one might think.
Yeah, I think that all sites that hold user-created content should offer that data (with code) to, for example, the Internet Archive. Actually I think there should be a law that requires these types of companies to do so.
Just my 2 cents.
I think what we’ll see in the future is data banks, just as we store our cash in banks today for storage and easy access. You’ll keep all your application data in an online databank that you control through the databank company’s interface, and online stores and social networks and others will simply access your databank for your data every time you load up an app. The latency between the app servers and your databank will be trivial as they’re all backbone-connected, and this way you can control your data, keeping it forever and controlling access/privacy. For large datasets, the databank will provide standardized and carefully controlled VMs where the app providers can crunch data while preserving privacy. The current data model is akin to the early days of commerce before banks existed; databanks will emerge just like banks did.
Many years ago when I first joined the workforce, a coworker was locked up in a conference room for 6 months with the assignment to xerox a copy of every document on a particular product. It seems someone was suing the company and the lawyers wanted every tidbit of information the company had.
In one memo out of thousands, someone made a joke about the product. The plaintiff’s lawyers took the comment as an admission that the company knew there was something wrong with the product. That joke cost the company a ton of time and money in proving the comment was only a joke and without any basis in fact.
That led to a new company policy — document retention rules. If there is not a legal or business reason to keep something, get rid of it after 6 months. Twice a year everyone in the company was required to stop working for a couple hours and go through every one of their files, and throw away what was no longer needed.
A few years ago, while I was working for a different company, a coworker called me with a concern. He was working with a customer who had some rather unusual email retention rules. No emails were saved for more than 3 days. If you were on vacation, tough, it’s gone. All emails were deleted after being opened. The customer wanted us to make sure none of the deleted emails could be recovered from the unused bits on the server disk drives; they wanted us to set up a disk-wipe feature on their email system. That company happened to be a big Houston-based energy trading firm that got into a lot of trouble several years ago. My coworker friend, who never worked for this company, found himself a material witness and was locked up with the lawyers for years. Thanks to this firm we have the Sarbanes-Oxley Act, and corporations can never delete their email.
If you in-source your email, what if Exchange version 21 can’t read version 5 databases when the lawyers come calling? If someone changed your email system, how are you going to read those old HP OpenMail messages? If you outsource your email, are you sure your service provider will keep your email forever (and keep you out of Sarbanes-Oxley jail)? What if they don’t? What if they don’t do a good job of protecting your email and someone exposes something sensitive? And if you are a private citizen, do you want someone keeping your email forever?
The permutations of this problem boggle the imagination.
Heh, speaking of stalking, I remember an old VAX script that had the inventory of public serial terminals indexed by port, so that you could put in your friends’ VAX userids and get their physical location if they were logged into a campus VT terminal.. I ended up writing a ‘stalker’ script which took that index and basically reversed the hash so that keys became values and vice-versa, though VAX batch scripting didn’t really have a paradigm for it… So it became “select the lab and console, and I’ll tell you who’s logged in there”… .plans, talkd, all that stuff reinvented for teh internets..
(get off my lawn!)
Come on Bob! This is old news: Richard Stallman raised the alarm bells about cloud computing back in September 2008. Check out this article from The Guardian.
The comments referencing data escrow have caught my eye. I work for an ISV called RainStor that launched a cloud archive service a few months ago. We’ve initially focused on delivering application retirement solutions that allow companies to preserve historical data from legacy applications in the cloud. However, we’ve also been asked by customers, partners, commentators, analysts, etc. how our cloud archive service can be used for “SaaS data escrow”. There seems to be a real and immediate need!
We’re keen to understand in more detail why and how companies might use cloud archive services to keep a copy of the data within their SaaS applications, so we’re running a survey. The survey is available at http://tinyurl.com/kl5l86 and the results are available to anyone who participates.
Not only are there companies still running Windows 98 for critical applications; there are many retail stores still using OS/2 to control their air handlers. Could you imagine the costs of upgrading the software and hardware in a national retail chain of 300 or more locations? Not to mention the costs of retraining the service providers. All so the temperature can be maintained. If it ain’t broke, don’t fix it. Hell, I still use an old server running 4 Pentium Pro processors as my firewall and print server. (I found it next to a dumpster 8 years ago.) I’d rather save my money than throw it away on beveled windows, widgets, and a system that looks prettier. As for cloud computing: I’ll be the last one on earth still keeping my files and software where they will be secure.
No, you call a copyright lawyer. Why? Because it’s not nice to steal people’s copyrighted work and then reuse it, especially if you don’t even credit the photographer.
https://www.flickr.com/photos/sony_boy/96899726/
Where would you like me to mail the invoice?