Re: Getting people into Linux

Les Mikesell wrote:
Tim wrote:
On Thu, 2007-01-04 at 10:49 -0600, Les Mikesell wrote:
What I've always wanted - and have been too lazy to work on myself - is
a method where any number of systems could be easily configured to
track the changes in any other system with a publish/subscribe scheme.

That is, one 'expert' would build and maintain a system optimized for a
particular type of use and his build tools would automatically package
his work for distribution - or he might just be assembling an assortment
of already available packages. In any case, without much extra work
it should be possible to make that exact set of packages selectable
by anyone else, in a way that updates and changes would follow the
master system automatically.  Currently this would take a hand-managed
yum repository for every instance and is so much work that it doesn't
happen.
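
The publishing half, at least, could be scripted today with nothing more than rpm and a web server. A rough sketch only; the output path is invented for illustration:

  # On the master, after every change (e.g. from cron or a yum
  # post-transaction hook), publish the exact installed package set:
  rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' \
    | sort > /var/www/html/manifest.txt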

Isn't Anaconda supposed to be able to provide for that sort of thing?
You'd still use the same repos, but just a different installation
script.

Yes, Anaconda does probe and understand the hardware differences
during an install, but it isn't involved with subsequent updates.
Kickstart can do an initial install of a matching set of packages
on different hardware, but then yum just updates the currently
installed packages. What I'd like to see is the ability to put
a large range of packages and package versions in the same
repositories and have an ongoing ability to track package and
package version changes to match a chosen master copy.  For
example, if the administrator of the master machine defers a
kernel update, pulls a few newer packages from the rawhide
repository, and installs postfix as an alternative to sendmail,
I'd like the tracking machines to offer to make the corresponding
changes on their next update run.
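
As a stopgap, each tracking machine could diff its own package list against a manifest published from the master (as sketched above) and hand the differences to yum. This is only a sketch - the manifest URL is invented, it assumes every listed version is still available in some enabled repository, and it ignores corner cases like kernels, which keep several versions installed side by side:

  # On each subscriber, e.g. nightly:
  wget -q -O /tmp/master.txt http://master.example.com/manifest.txt
  rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > /tmp/local.txt
  # Install whatever the master has that we lack, at the master's exact
  # version (yum accepts a full name-version-release.arch argument):
  comm -13 /tmp/local.txt /tmp/master.txt | xargs -r yum -y install
  # Then offer to drop what the master no longer carries (e.g. sendmail
  # after a swap to postfix); run yum interactively so removals get a
  # human look:
  rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > /tmp/local.txt
  to_remove=$(comm -23 /tmp/local.txt /tmp/master.txt)
  [ -n "$to_remove" ] && yum remove $to_remove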


This practice is best handled in a 'diskless' or 'thin client' scenario today. There really is no need for 'fat' clients, and there hasn't been for many years.

Your point is valid, but experience is a hard teacher, and it shows that each 'fat' client demands much more administration time than a 'diskless' or 'thin' client does.

For example:

Years ago it was common to hire UNIX administrators based upon the number of systems in use.

The common scenario was 25 different systems per admin, and up to 50 identical systems per admin.

Ratios for diskless systems varied, but for Solaris the common scenario was up to 160 systems per admin. My own experience was just over 100 diskless systems and their server; I worked on those maybe 20 hours a week once they were set up.

Diskless == local disk used for swap, cache, and localized data only.
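
For anyone who hasn't run one: such a client mounts its root filesystem from the server over NFS and keeps only scratch data on the local disk. A minimal /etc/fstab sketch; the server name, export path, and partition layout are invented for illustration:

  server1:/export/roots/client42  /           nfs   defaults  0 0
  /dev/hda1                       swap        swap  defaults  0 0
  /dev/hda2                       /var/cache  ext3  defaults  0 0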

In 1997, the IT industry had its best chance to move 'common user computing' to thin clients based on web services. There were lots of competitors, and several were quite promising.

Since you are spending weekends helping your parents and friends 'fix' issues on their computers, you know how the 'thin client' movement ended: badly.

Imagine your firm, your parents, and your friends using an off-the-shelf device to surf the net, write the occasional document, and read their email. That's about all 90 percent of current computer users do. They don't need a 'fat' computer to do that.

In any case, what you are suggesting is very much like what is done for 'diskless' clients.

Good luck!

