On Thu, 2010-07-22 at 18:58 -0700, Les wrote:
> On Thu, 2010-07-22 at 09:25 +0200, birger wrote:
> > But if it works and it means
> > we (the back-end admins) can continue to override the users' wishes and
> > provide what they need instead of what they ask for, I'm all for it
> > (like when DBAs come with very specific orders detailing RAID type and
> > stripe width for their data and log volumes and we give them everything
> > from our standardized RAID 5 pools)

> This is one of the issues. You have decided on what they need. You may
> be right from the viewpoint of storage, but from the access point you
> could be way off the mark. A database's access speed is directly
> related to the disk locations of the various fields and structures.
> True, you have given them storage at minimal cost to you, but what have
> you cost the customer? Do you even have a way of benchmarking the
> search times you have impacted? Do you truly understand the DBAs' tasks
> and the amount of data they have to search?

As long as I can see that they are never even getting close to hitting
the disks, but are working 100% from the RAID controller cache, I am
quite certain I don't have any negative impact on DB performance. :-)
As soon as I see that a particular volume is getting *real* I/O, I can
move it to faster storage.

I can assure you that with DBA literature written for US-scale
operations, most DBAs in a small country like Norway spec their needs
as if the user base were bigger than the population...

. . .

> When you are moving data around, do you examine the lifetime of the
> storage, or do you know the level of urgency when that particular
> piece of data is needed or how it should be accessed? These parameters
> are the areas I have had to deal with in corporate settings when some IT
> person decided I didn't know what I was doing, and was sure they knew
> better.
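The "watch for real I/O" rule above can be sketched as a simple threshold check. This is only an illustration: the counter names and the 95% threshold are hypothetical, not taken from any particular array's management API.

```python
# Sketch of the tiering rule described above: a volume only needs to
# move to faster storage once reads actually start hitting the disks
# instead of being served from the RAID controller cache.

def needs_faster_storage(reads_total, reads_from_cache, threshold=0.95):
    """Return True when the cache hit ratio drops below the threshold,
    i.e. the volume is doing *real* disk I/O."""
    if reads_total == 0:
        return False  # idle volume, nothing to migrate
    hit_ratio = reads_from_cache / reads_total
    return hit_ratio < threshold

# A volume served almost entirely from cache stays on the cheap pool:
print(needs_faster_storage(1_000_000, 999_000))  # False
# A volume doing substantial real disk I/O is a migration candidate:
print(needs_faster_storage(1_000_000, 600_000))  # True
```

In practice you would feed this from whatever counters your array or OS exposes (e.g. sysstat/iostat on the host side), but the decision logic stays this simple.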
I have no direct experience with these tools yet, as the cost has been
too high for me to justify compared to just throwing more disk at the
problem. :-)

I agree with you that the implications of moving old data around would
have to be discussed with the customers. The first use for automated
migration of files to slower storage would be users' home directories.
For application data a careful study is required for each app, as it is
impossible for the IT department to make the kind of guesses you
mention.

For now, I have very good results with just enabling deduplication on
the home directories. At least it takes care of multiple copies of
files.

My hope would be that with pressure on storage vendors to support cloud
APIs, it would become easier to do this kind of migration without
today's price tag. I can hope, can't I?

> > For VMs this is great. More openness would mean that it would become
> > more feasible to run multiple physical farms. One VMware farm
> > (production servers, HA and/or FT support, etc.), one farm based on
> > free software for development and testing, perhaps one Hyper-V if you
> > have a volume agreement with m$ that makes this cheaper for your
> > Windows VMs... If I can have one console to manage them all, move VMs
> > around and so on, I would be very happy.

> What happens in a parallel processing situation when you move the VM?
> If the IP address changes, the tight binding of resources will be
> disturbed, and that will result in a web search to find the new
> location, rebuild the linkages, and what happens to the computation
> while that is going on?

I would expect downtime while migrating between different platforms. As
in: shut down the VM, migrate, boot. As long as you move within the
internal cloud(s), your network infrastructure will hopefully let you
keep the same IP and so on. Migrating to an external cloud is more
disruptive. Of course you can use NAT-like techniques to mask the fact
that the service has moved.
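On a libvirt/KVM farm, the shut-down/migrate/boot sequence above could look roughly like this. It is a sketch only: the VM name and destination URI are made-up examples, the commands are echoed rather than executed, and an offline `virsh migrate` moves the domain definition only, so storage must be on shared or separately moved volumes.

```shell
# Offline migration sketch: shut down, move the definition, boot.
VM=appserver01                       # hypothetical guest name
DEST=qemu+ssh://prodfarm01/system    # hypothetical destination host

# Dry-run wrapper: print each command instead of running it, since
# this sketch assumes a libvirt setup that may not match yours.
run() { echo "+ $*"; }

run virsh shutdown "$VM"                                # clean guest shutdown
run virsh migrate --offline --persistent "$VM" "$DEST"  # copy the VM definition
run virsh start "$VM"                                   # boot on the new farm
```

Live migration would avoid the downtime, but only within one platform; between different virtualisation products the offline route above is the realistic one.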
At some point in the future the network cloud service could do this
automagically. For now, moving services between different internal
service levels would be easier than crossing the border to the external
cloud. If you run one 'cheap' virtualisation farm for development and
initial testing, then clone to the production farm for acceptance
testing and production, downtime during migration shouldn't be a
problem.

> > For networking it seems a bit cloudy yet how this will work out.
> > There are so many security implications.
>
> Precisely!! Not to mention the transfer of responsibility and
> accountability.

And possibly the biggest bummer is that responsibility and
accountability suddenly become very vague terms. When services run in
external clouds, the internal IT department can no longer offer complete
responsibility. Customer beware.

> > If I open up the possibility for my internal customers to host
> > computing services in the external cloud, I would like to make sure
> > everything they order has to be verified against company security
> > policies. Those security policies will also need a rewrite to
> > accommodate these new services.
>
> Like private pipes, SVM services, encrypted RPC links etc.??? Oh, and
> my personal favorite (retired military, you know): bureaucracy and red
> tape.

Exactly. If you don't update your security policies rather quickly, you
could end up with nasty surprises where such things aren't covered at
all. When some department head installs a new service in the external
cloud on his/her own, all kinds of holes could be opened into the
infrastructure without it being a breach of policy at all.

> > Conclusion? Cloud services are very interesting.
>
> There is a common misconception that what you find interesting users
> will love. I really doubt that that is a good basis for this drive to a
> known bad technology with poor history.

Interesting doesn't even mean that *I* will like it. :-D I'm a kind of
careful optimist.
I hope there is some progress in every new paradigm, and if 10% of the
hype is true it is excellent. I expect more like 1% useful content once
you pop the balloon and get rid of all that hot air... The challenge is
to be able to tell early on what we will see as the end result. What
will be the reality? And to be able to explain that to management in
very simple terms, yet still be more convincing than the trained
salesdroids.

> > The potential
> > implications on interoperability within my own server room? That's
> > the big one. Will it just add to the complexity, or is this so hyped
> > up now that everybody will support new standards at a low level so we
> > can actually simplify internal operations? Will it ever become what
> > the hype promises? Nobody believes that, I think...
>
> And now we introduce IPv6, soon to be followed with IPv6A or IPv7.
> What happens then?

IPv6 is the big worry right now. With management being pounded by
Microsoft salesdroids telling them they *must* upgrade to Windows 7 and
enable DirectAccess, and how IPsec and NAP technologies make this a
breeze to implement with no security implications. Right. Last time they
said that was with OCS. Almost a year since installation, and Microsoft
is still unable to get the MSN integration to work. That installation
was also a breeze to implement, according to the salesdroids.

A slight modification to the above numbers: if a new hyped-up M$
technology actually delivers 1% of what was promised, I will be shocked.

> What costs will this pass on? If you put my systems in the cloud, will
> I be paid for their processing time? At what par value?

If the IT department chooses to use external cloud services, this is yet
another issue. Hopefully not to be handled by the tech geeks.

> I know that I am such a boor for pointing these things out, but without
> boors like me, where would we really be?

Somewhere else. That's all I can say.
--
birger

--
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines