Monday, December 29, 2008

Windows And Software Installation Automation In An Enterprise

by: Ivan Abramovskiy

IT department automation in an enterprise.

Recommended: for the heads of enterprises, IT departments, system administrators

This article will tell you:

How to quickly install Windows on all computers of an enterprise.

How to quickly upgrade software on all enterprise computers without losing any data.

How to automate all kinds of routine processes in an enterprise.

How to increase the productivity of system administrators in an enterprise.

Any company faces the problem of updating software on all of its computers in a timely manner, as well as the problem of quickly restoring and configuring an employee's working environment. To solve these problems we offer our product: Almeza MultiSet.

Before a company starts using MultiSet:

The software is installed and configured manually, leaving the employee who works at that computer idle in the meantime.

Databases are updated, computers are configured, software is set up - all that is also done manually and requires the administrator to be present at every computer.

Every remote office requires its own system administrators on staff.

IT staff must be scheduled to be on duty 24 hours a day.

Business trips by technical specialists add further expense.

You get the following benefits after you deploy MultiSet in your company:

You will need only one administrator at any point on the network to update any amount of software on any number of computers. Note that the time needed to update software on all computers will be approximately equal to the time needed for one computer!

It is possible to quickly change the configuration on multiple or separate computers throughout the entire network. Note that the administrator has to be physically present at one computer only.

It is possible to quickly and safely reinstall Windows XP without losing any current data!

It is possible to create a master disk for the standard automatic installation of a set of software on any number of computers.

It is possible to automatically install Windows together with drivers, service packs, any types of tools and applications.

It is possible to quickly update databases and configuration files on client computers.

Using MultiSet dramatically increases how effectively the IT department uses its working hours.

Why MultiSet in particular?

Reliability. MultiSet runs dependably on any version of the Windows operating system.

Speed. Thanks to its innovative algorithm, MultiSet performs its functions quickly, accurately and, most importantly, reliably.

Trustworthiness. Our clients include banks, government organizations, customer support services and large corporations, which demonstrates the usefulness of MultiSet in practice.

Flexible pricing policy. We offer a very flexible pricing policy that allows you to cover a large number of computers at minimum cost.

Low system requirements. MultiSet has minimal system requirements. MultiSet supports the following operating systems: Windows Vista/XP/2003/2000 (32-bit) and Windows Vista/XP/2003 (64-bit).

SOLUTIONS FOR SMALL BUSINESS

Small organizations often do not have the funds to employ an administrator, and it often falls to regular employees to update and configure software.

MultiSet reduces the expense of calling in an administrator for every client computer and saves time for everyone, giving employees more time to spend on their direct duties.

Download Free 30-day Trial:
http://www.almeza.com/download/multiset.exe

Web:
http://www.almeza.com

Read more...

Mentoring In IT

by: Barry Koplowitz

This article is also available as a "The Sniffer Guy" podcast on iTunes.

ATTENTION AMERICAN IT MANAGERS: Within the next decade most of your best people will retire or die. Your senior staffers are baby boomers with twenty years or more of experience in their field. They built the systems; they learned the operating systems as they were created; they know what they know from real-life experience that cannot be learned in school. They are also somewhere between their late forties and early sixties. They rose to the top while competing within the largest workforce America has ever seen. When they leave, they will take a level of efficiency and expertise with them that will take twenty years to replace.

To make matters worse, the population of appropriately educated Americans coming up behind them is far smaller than the population getting ready to move out. Do the math. Start now. Trying to buy that talent later will not only cost you a fortune; you will also be competing with the entire world for a very small pool of such individuals.

In this corporate environment where everyone is disposable and so much work is done by contractors so that companies can avoid making a commitment to personnel, it is far too easy to miss this growing danger. Business managers may have learned this—but probably not. IT managers, however, usually have not had much exposure to the concept. They live in a world of projects that staff up for the project and then disband. How do you bring up the next crop of leaders in such an environment? This is going to take far too many companies by surprise! And IT is going to be hit much harder than most other departments. I know of no other area of corporate life that is so project oriented. In the IT world, you build a team and disband it a few months later--even when it does outstanding work. All in the name of avoiding long-term cost. It also avoids long-term success.

There is also an emotional and psychological component to this problem. After the Dot-Bomb debacle, many people with decades of smarts were kicked out due to layoffs, or companies failing, or being eaten by a bigger company that had its own staff. Why do we eat our seed corn? Those that survived are still concerned about it happening again. And it could. It makes them conservative. Possibly even a tiny bit timid and less likely to share their knowledge freely. I don't blame them. Because our corporate mentality is to cut the expensive people and replace them with contractors—that we can easily get rid of when the job is done. What a vote of no confidence! This is considered to be a strategy. OK…I will accept that. It is a strategy. But it is a very short sighted one. Where is the mid-range planning?

So, what do you do? In this article and podcast, let us restrict our focus to Mentoring. Future articles and podcasts will explore other activities that are also proven and available.

Do you have a mentoring plan in place? I don't mean the typical, "oh, we believe in mentoring around here" kind of plan. I mean a thought out purposeful plan whereby you determine which journeyman IT personnel have the potential to grow into those senior roles and have your baby boomer senior staffers truly mentor them to bring them along. I doubt it. It does exist; I know of a few such companies. But it is rare.

Part of the problem for those that want to create a mentoring program is that it is not so simple to identify candidates. Let me help with that. Not everyone is a candidate for mentoring and few people are cut out to be mentors. It's sad but true. Don't spin your wheels and exhaust your enthusiasm backing the wrong plan and/or individuals. You need to have some way to identify in whom you want to invest. And, please understand, it is an investment. You will invest money but not only money. You will invest the time of very busy and critical people. That will hurt a bit—but you don't really have any choice. If you are responsible for future planning in your organization, ignoring this process is irresponsible.

Here is a handy way to help make these determinations. A friend once told me that he had learned in a sales course at IBM, decades ago, about a concept that went something like this—and I may be mangling it so please forgive me. It was not meant for IT or Mentorship purposes, but I have adapted it.

There are four levels of competence. They are listed in order from least capable to most capable in performing their job. Oddly, this does not represent the order in which they are most effective in a mentoring program.

- Unconsciously Incompetent
- Consciously Incompetent
- Consciously Competent
- Unconsciously Competent

UNCONSCIOUSLY INCOMPETENT: This person doesn't know that they don't know. They are not a candidate for this program—but may need help in learning to learn.

A famous story about Thomas Edison says that he used to test fresh new engineers who wanted to work for him by putting them in a lab with an oddly shaped glass container. He would tell them to figure out the internal volume of the container. One time he watched a new graduate work out the problem by measuring all the diameters of the odd twists and turns of the glass and carefully making the calculations on his slide rule. When the young man presented the answer, Edison said, "You got the right answer, but I can't give you the job." The young man asked why, and Edison responded by picking up the container, filling it with water and pouring it into a graduated beaker, getting the answer in ten seconds. He said, "Son, I am glad you know the answer, but I'm afraid you just don't know the question." The Unconsciously Incompetent person does not know the question.

CONSCIOUSLY INCOMPETENT: This person knows that they don't know and is probably working to get better. They are a junior person with potential. Such an individual bears watching—and possibly a little testing. Don't make it something too hard, but it should be a little scary, something that makes them stretch. See what happens. This is a good candidate to groom for middle management and in future years, senior management.

CONSCIOUSLY COMPETENT: This is where the high performers stand. They will be in middle to senior management already. These people are two-for-one sales, all by themselves. They are both a candidate to RECEIVE mentoring—for senior management—and the ideal person to PROVIDE mentoring for the Consciously Incompetent candidate. They have a high level of skill and consistently perform very well.

This person knows what they are doing, and remembers learning how to do it. They are not as capable as the Unconsciously Competent person. Nevertheless, they know what they know and they know how to transfer it to someone else--if they are motivated and are not afraid of losing their own place. If they know that they are part of something stable and long term and can afford to create their replacement--they are who you need. Because that is exactly what you want them to do. You want them to create their own replacement: to bring up someone who will ask management for less and who has a longer run ahead of them, while knowing that they are not committing financial suicide by doing so.

Not all people in this category will make good Mentors as communication skills and a desire to teach are critical components to performing well in the role. I know many individuals who are extremely skilled and have the sort of knowledge that is transferable—but who could never serve this role with someone successfully. You need to keep other variables in mind.

- Communication Skills
- A temperament that tends toward explaining what they are doing, rather than keeping things "close to their vest."
- Good people skills

The people that will make the best Mentors are already doing it. They are respected by their peers as someone that is very free with their knowledge. They are just informal about it as there is no real structure. Find those people and give them a mandate, the time and some guidance and they will do a wonderful job for you.

UNCONSCIOUSLY COMPETENT: The highest level. This person doesn't even know why they are so good anymore. Everything is so effortless that it is unconscious. This is the best you can get, and you may only meet a handful of people like this in your career. Don't touch this person! There are two very good reasons why.

1) They are not replaceable or reproducible. They really are unique. Give them whatever they want to keep them doing what they do and don't distract them!

2) The other reason to keep them away from a mentoring program is because they make terrible mentors. They have no idea how they are doing what they are doing. They just do it--better than anyone else. But, they can't teach what they themselves don't really understand. Treat them as the gift that they are and get out of their way. Additionally, the probable failure in their attempt at mentoring will mess with their confidence. You don't want that.

There is a lot written on mentoring techniques, so I will not belabor the point. You, the IT Managers, may not have the authority or sense of security to set up this sort of program. I understand. However, if you want to do it and you have the authority, it isn't really hard to begin. There is a lot of material already in publication about various approaches. This is not a new concept. Available resources will probably not be specifically IT management related, but you can apply their lessons. My goal in this article is not to present something you have never heard of before. Rather, it is to remind you of what you already know--and to demonstrate how critical it has become to use that information.

Projects are also an opportunity. If you allow less capable people to work with more capable people, or more accurately, tag along, relationships can be created. Make the project oriented nature of our industry, which is its greatest weakness in this regard, become a new strength.

Read more...

Complete Overview of Linux

by: Matthew Gebhardt

This article will discuss the differences between the Linux and Windows operating systems, covering some of the pros and cons of each.

Let us first start with a general overview of the Linux operating system. Linux at its most basic form is a computer kernel. The kernel is the underlying computer code used to communicate with hardware and other system software; it also runs all of the basic functions of the computer.

The Linux kernel is an operating system that runs on a wide variety of hardware and serves a variety of purposes. Linux is capable of running on devices as simple as a wristwatch or a cell phone, but it can also run on a home computer using, for example, Intel or AMD processors, and it is even capable of running on high-end servers using Sun SPARC CPUs or IBM PowerPC processors. Some Linux distros can run on only one processor, while others can use many at once.

Common uses for Linux include home desktop computing and, more commonly, server applications such as web or mail servers. You can even use Linux as a dedicated firewall to help protect other machines on the same network.

A programming student named Linus Torvalds first created Linux as a variant of the Unix operating system in 1991. Torvalds released Linux as open source under the GNU General Public License (GPL), so other programmers could download the source code free of charge and alter it any way they saw fit. Thousands of coders throughout the world began downloading and altering Linux's source code, applying patches, bug fixes and other improvements to make the OS better and better. Over the years Linux has gone from a simple text-based clone of Unix to a powerful operating system with full-featured desktop environments, unprecedented portability and a variety of uses. Most of the original Unix code has also been gradually written out of Linux over the years.

As a result of Linux being open source software, there is no one version of Linux; instead there are many different versions, or distributions, of Linux suited to a variety of users and tasks. Some distributions of Linux, such as Gentoo and Slackware, lack a complete graphical environment and are best suited for Linux experts, programmers and other users who know their way around a command prompt. Distributions that lack a graphical environment are best suited for older computers lacking the processing power necessary to render graphics, or for computers performing processor-intensive tasks, where it is desirable to have all of the system resources focused on the task at hand rather than wasting resources on graphics. Other Linux distributions aim at making the computing experience as easy as possible. Distributions such as Ubuntu or Linspire make Linux far easier to use by offering full-featured graphical environments that help eliminate the need for a command prompt. Of course, the downside of ease of use is less configurability and system resources spent on graphics processing. Other distributions, such as SUSE, try to find a common ground between ease of use and configurability.

“Linux has two parts, they include the Kernel mentioned previously, and in most circumstances it will also include a graphical user interface, which runs atop the Kernel” (ref #3). In most cases the user will communicate with the computer via the graphical user interface.

(ref #6) Some of the more common graphical environments that can run on Linux include the following. First is the KDE GUI (graphical user interface). Matthias Ettrich developed KDE in 1996. He wanted a GUI for the Unix desktop that would make all of the applications look and feel alike, and a desktop environment for Unix that would be easier to use than the ones available at the time. KDE is a free open source project with thousands of coders working on it throughout the world, but it also has commercial support from companies such as Novell, Trolltech and Mandriva. KDE aims to make an easy-to-use desktop environment without sacrificing configurability. Windows users might note that KDE has a look similar to Windows. Another popular GUI is (ref #7) GNOME. GNOME puts a heavy emphasis on simplicity and usability. Much like KDE, GNOME is open source and free to download. One notable feature of GNOME is its language support: GNOME supports over 100 different languages. GNOME is licensed under the LGPL (Lesser General Public License). That license allows applications written for GNOME to use a much wider set of licenses, including some commercial ones. The name GNOME stands for GNU Network Object Model Environment. GNOME's look and feel is similar to that of other desktop environments. Fluxbox is another example of a Linux GUI. With less of an emphasis on ease of use and eye candy, Fluxbox aims to be very lightweight and a more efficient user of system resources. The interface has only a taskbar and a menu bar, which is accessed by right-clicking the desktop. Fluxbox is most popular for use on older computers with limited system resources.

Although most Linux distributions offer a graphical environment to simplify the user experience, they all also offer a way for more technically involved users to communicate directly with the kernel via a shell or command line. The command line allows you to run the computer without a GUI by executing commands from a text-based interface. An advantage of using the command prompt is that it uses fewer system resources and enables your computer to focus more of its energy on the task at hand. Examples of commands include the cd command for changing your directory, the halt command for shutting down your system, or the reboot command for restarting the computer, etc.
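To make this concrete, here is a short sketch of a few harmless commands as they might be typed at a Linux shell (the destructive halt and reboot commands are shown only as comments, for reference):

```shell
# Move to a directory and confirm where we are
cd /
pwd            # prints the current working directory, here: /

# List the directory's contents in long format
ls -l

# Show the release of the running kernel
uname -r

# Shown for reference only -- these stop the machine:
# halt      # shut the system down
# reboot    # restart the computer
```

Commands like these behave the same everywhere, whether typed on a full desktop distribution or on a minimal text-only install.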

Now that we are more familiar with the Linux operating system, we can note the many ways in which Linux differs from the world's most popular OS, Microsoft Windows. From this point forward we will discuss some of the more prominent ways in which Linux differs from Windows.

For starters, there is only one company that releases a Windows operating system, and that company is Microsoft. All versions of Windows, whether Windows XP Home, Business or Vista, and all updates, security patches and service packs for Windows, come from Microsoft. With Linux, on the other hand, there is no one company that releases it. Linux has thousands of coders and many companies throughout the world volunteering their time to work on patches, updates, newer versions and software applications. Although some companies charge for tech support, and others charge for their distribution of Linux by packaging it with non-free software, you will always be able to get the Linux kernel for free, and you can get full-featured Linux desktops with all the necessary applications for general use for free as well. The vendors that charge money for their distribution of Linux are still required to make the source code available in order to comply with the GPL license. With Microsoft Windows, on the other hand, you have to pay Microsoft for the software, and you will also have to pay for most of the applications that you will use.

Windows and Linux also differ on tech support issues. Windows is backed by the Microsoft Corporation, which means that if you have an issue with any of their products, the company should resolve it. For example, if Microsoft Windows is not working right, you should be able to call Microsoft and make use of their tech support to fix the issue. Tech support is usually included with the purchase of the product for a certain amount of time, perhaps a two-year period, and from there on you may be charged for the service. Although companies such as IBM back their Linux products, for the most part if you use Linux you are on your own. If you have a problem with Ubuntu Linux, you cannot call Ubuntu and expect help. Despite the lack of professional help, you can receive good tech advice from the thousands of Linux forums on the web. You can also get great help from social networking sites such as MySpace by posting questions in the many Linux groups. You can usually receive responses to your questions in a matter of hours from many qualified people.

Configurability is another key difference between the two operating systems. Although Windows offers its Control Panel to help users configure the computer to their liking, it does not match the configuration options that Linux provides, especially for a tech-savvy user. In Linux the kernel is open source, so if you have the know-how, you can modify it in virtually any way you see fit. Linux also offers a variety of graphical environments to further suit your needs. As mentioned earlier, Linux is capable of running full-featured graphical environments like KDE, or more lightweight and resource-friendly GUIs like Fluxbox or Blackbox, to suit users with older computers. There are also versions of Linux designed to emulate the Windows look and feel as closely as possible. Distributions such as Linspire are best suited for users migrating over from the Windows world. There are also distributions that include no graphical environment at all, to better suit advanced users and those who need to squeeze out all of the computing power they can get. All of this configurability can sometimes be problematic, as you will have to decide which desktop is right for you, and to make things easier on yourself you will need to install only applications that are native to your distribution and graphical environment.

(ref #1) The cost effectiveness of Linux is another way it separates itself from Windows. For home use Linux is cheap and in most cases completely free, while Windows varies in cost depending on which version you buy. With Linux most of the applications will also be free, whereas for Windows, in the majority of cases, you are supposed to pay for the applications. In most cases, with Linux there is no need to enter a product activation key when performing an installation; you are free to install it on as many computers as you'd like. With Windows you are only allowed to install it on one computer, and Microsoft uses product activation software to enforce this rule. When installing Windows you must enter a product activation key, which will expire after a limited number of uses. If you wish to, you can purchase Linux from a variety of vendors, which will include a boxed set of CDs, manuals and tech support, for around $40 to $130. If you purchase a high-end version of Linux used for servers, it may cost anywhere from $400 to $2,000. "In 2002 Computerworld magazine quoted the chief technology architect at Merrill Lynch in New York as saying 'the cost of running Linux is typically a tenth of the cost of running Unix or Windows alternatively.'" (ref #1)

(ref #1) Installation of Windows is generally easier than installing Linux. "With Windows XP there are three main ways to install. There is a clean install, in which you install Windows on a blank hard drive. There is also an upgrade install, in which you start with an older version of Windows and 'upgrade' to a newer one. An advantage of upgrading is that all of the files on the older system should remain intact throughout the process. You can also perform a repair install, in which case you are installing the same version of Windows on top of itself in order to fix a damaged copy of Windows. There is also a recovery, which technically is not an install; it is used to restore a copy of Windows back to its factory settings. The disadvantage of recovering Windows is that you will lose all of the data that resides on the damaged copy of Windows." (ref #1) Also, with Windows you can rest assured that your hardware will most likely be supported by the operating system, whereas with Linux you cannot be sure that all of your hardware will be supported. With Linux, installation varies greatly from distro to distro. You may be presented with a graphical installer or a text-based installer; these variations make Linux a bit more difficult and unpredictable to install than Windows (although the difficulty is disappearing). You may perform a clean install of Linux or dual-boot it to co-exist with another operating system. With Linux, rather than having to buy an upgrade CD, you can install updates by downloading and installing them while your desktop is running. With Linux it is also not necessary to reboot your computer after most upgrades; a reboot is only necessary after an upgrade to the kernel. It is also possible to run Linux without ever installing it on a hard drive; many distributions of Linux will allow you to run it straight off a live CD.
The advantage of this is that you do not need to alter your system in order to try Linux. You can run Linux off the CD without touching your Windows partition. Other advantages include the ability to rescue a broken Linux system: if your Linux computer will not boot, you may insert a live CD and boot off it in order to repair the damaged installation. You may also use a Linux live CD to recover files from a damaged Windows computer that will no longer boot. Since Linux is capable of reading NTFS filesystems, you may copy files from a Windows computer to a USB flash drive, floppy drive, etc.
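As an illustration of that rescue scenario, the sketch below shows roughly what such a session from a live CD might look like. The device name /dev/sda1, the user name and the mount points are assumptions made up for the example; on a real machine you would first identify the Windows partition (for instance with fdisk -l) and run the commands as root:

```shell
# Mount the Windows NTFS partition read-only so nothing can be damaged
mkdir -p /mnt/windows
mount -t ntfs -o ro /dev/sda1 /mnt/windows

# Copy a user's documents to a USB flash drive mounted at /media/usb
mkdir -p /media/usb/rescued
cp -r "/mnt/windows/Documents and Settings/User/My Documents" /media/usb/rescued/

# Detach the partition cleanly when finished
umount /mnt/windows
```

Mounting read-only (-o ro) is a sensible precaution here, since writing to NTFS from Linux was still considered risky on many distributions of this era.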

Another major difference between Linux and Windows is the applications you will use with either OS. Windows includes a much wider abundance of commercially backed applications than Linux does. It is much easier to find the software you are looking for with Windows, because so many software vendors make their products compatible with Windows only. With Linux you will, for the most part, be forced to let go of the familiar applications you have grown accustomed to on Windows in favor of lesser-known open source apps made for Linux. Applications such as Microsoft Office, Outlook, Internet Explorer, Adobe Creative Suite, and chat clients such as MSN Messenger do not work natively with Linux, although you can get Microsoft Office and Adobe Creative Suite to work using software from CodeWeavers called CrossOver Office. Instead of these applications you will need to use Linux apps such as OpenOffice.org, the GIMP image editor and the Thunderbird email client; instead of MSN Messenger you can use the Gaim messenger, and you can use Firefox as your web browser. Also, with Linux it can be difficult to install software even if it is made for Linux, because Linux has so many different versions: software made to install on one version will probably require some configuration in order to install on another. For example, an application built for the KDE graphical environment will not easily install under the GNOME GUI and may require some configuring on your part to install successfully.

The hardware that Linux and Windows run on also sets them apart. Linux runs on many different hardware platforms, from Intel and AMD chips to computers running IBM PowerPC processors. Linux will run on everything from the slowest 386 machines to the biggest mainframes on the planet; newer versions of Windows will not run on the same range of hardware as Linux. Linux can even be configured to run on Apple hardware, iPods or smartphones. A disadvantage of Linux arises when it comes to hardware devices such as printers, scanners or digital cameras. Whereas driver software for these devices is often readily available for Windows, with Linux you are for the most part left on your own to find drivers. Most Linux users will find comfort in the fact that drivers for the latest hardware are constantly being written by coders throughout the world and are usually made available very quickly.
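When hunting for a driver, the first step is usually to find out exactly what hardware the kernel can see. A brief sketch, assuming the pciutils and usbutils packages (which provide lspci and lsusb) are installed, as they are on most desktop distributions:

```shell
# PCI devices: graphics card, network and sound chips
lspci

# USB devices: printers, scanners, digital cameras
lsusb

# The kernel log often names the driver a device was bound to
dmesg | grep -i usb
```

The vendor and device names these commands print are exactly what you would search for on your distribution's forums or in its driver database.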

(ref #1) One of the most notable differences between the two operating systems is Windows' legendary problems with malicious code, known as viruses and spyware. Viruses, spyware and a general lack of security are the biggest problems facing the Windows community. Under Windows, viruses and spyware have the ability to execute themselves with little or no input from the user. This makes guarding against them a constant concern for any Windows user. Windows users are forced to employ third-party antivirus software to limit the possibility of the computer being rendered useless by malicious code. Antivirus software often has the negative side effect of hogging system resources, thus slowing down your entire computer; most antivirus software also requires that you pay for a subscription and constantly download updates in order to stay ahead of the intruders. With Linux, on the other hand, problems with viruses are practically non-existent, and in reality you do not even need virus protection for your Linux machine. One reason why viruses and spyware are not a problem for Linux is simply that far fewer are made for Linux. A more important reason is that running a virus on a Linux machine is more difficult and requires a lot more input from the user. With Windows you may accidentally run and execute a virus by opening an email attachment or by double-clicking a file that contains malicious code. With Linux, however, a virus would need to run in the terminal, which requires the user to give the file execute permissions and then open it in the terminal. And in order to cause any real damage to the system, the user would have to log in as root, by typing a user name and password, before running the virus.
For example, to run a virus embedded in an email attachment, the user would have to open the attachment, save it, right-click the file, choose Properties from the menu, give it execute permissions there, and only then open the file in the terminal to run the virus. And even then the user would only be able to damage his or her home folder; all other users' data would be left untouched, and all root system files would also remain untouched, because Linux requires a root password to make changes to those files. The only way the user could damage the whole computer would be to log in as the root user, by providing the root user name and password to the terminal, before running the virus. Unlike Windows, in Linux an executable file cannot run automatically; it must be given execute permissions manually, which significantly improves security. In Linux, the only realistic reason you would need virus protection is if you share files with Windows users, and that is to protect them, not you, so that you do not accidentally pass a virus to the Windows computers you share files with.
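The permission model described above is easy to demonstrate in a terminal. The sketch below (the file name is illustrative) shows that a freshly saved script cannot run until the user explicitly grants it the execute bit:

```shell
# A freshly saved file, as it would arrive in an email attachment,
# has no execute permission.
printf '#!/bin/sh\necho "payload ran"\n' > attachment.sh

# Trying to run it directly is refused: Permission denied.
./attachment.sh || echo "refused: no execute permission"

# Only after the user deliberately marks it executable...
chmod +x attachment.sh

# ...does the script actually run -- and even then only with the
# user's own privileges, not root's.
./attachment.sh
```

Even after `chmod +x`, the script can only touch files the current user owns; modifying system files would still prompt for the root password.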

The above was a general overview of some differences between the Windows operating system and Linux. To recap, we started with the fact that Windows has only one vendor releasing the software, while Linux comes from millions of different coders throughout the world. We also noted that the Linux kernel and many of the applications used with it are completely free of charge, whereas with Windows you are forced to pay for most of the software. Unlike Windows, Linux is often lacking in professional tech support, and Linux users are often left on their own to solve technical issues; they can either pay for tech support or rely on the many Linux forums and groups available on the Internet. Because the kernel is open source, Linux has a huge advantage over Windows in configurability: you can configure Linux to run almost any way you see fit by manipulating the kernel. Installing the Windows operating system and applications is easier thanks to its universal installer, and finding applications for Windows is easier because of its popularity; most apps are available for Windows only and are made easily available. Linux will run on a greater variety of hardware than Windows, from mainframe supercomputers running multiple IBM PowerPC chips to a small laptop running an AMD processor. And of course, the biggest difference in this writer’s opinion is that Linux does not suffer from the onslaught of viruses and other malicious code that plagues Windows, which can easily destroy your system if not properly guarded against.

In conclusion, the Linux OS really is the superior software. Other than a few minor nuisances, Linux outperforms Windows in most categories. The fact that Linux is more secure is the tipping point that tilts the scales in favor of Linux. Windows simply suffers from far too many security vulnerabilities to be considered the better overall desktop environment.

References

Reference #1: http://www.michaelhorowitz.com/Linux.vs.Windows.html

Reference #2: http://www.theinquirer.net/en/inquirer/news/2004/10/27/linux-more-secure-than-windows-says-study

Reference #3: http://www.linux.com/whatislinux/

Reference #4: http://www.linux.org/info/

Reference #5: http://en.wikipedia.org/wiki/Linux%5Fkernel

Reference #6: http://en.wikipedia.org/wiki/KDE

Reference #7: http://en.wikipedia.org/wiki/GNOME


Game Development - Story Bible Example

by: Sebastian Gross

The bible deals exclusively with story and its elements. While the design document guides the creation of the entire gaming experience, the bible controls the game’s interactive screenplay.

Log Line

Let’s say we’re working on a game titled “Hangnail,” the latest game inspired by Quake. Hangnail’s bible would include a “treatment” or synopsis of the game’s story. That treatment should include one- or two-sentence reviews of the story’s beginning, middle, and end. In some cases, the treatment could go into greater detail, stretching from one page to 20 or more, if the designer or game writer chooses to really flesh out the story in the design stage. If the game’s narrative is truly based on cinematic story construction, the story might include first, second, and third act reviews. Leave those bits to your writer—we waste hours worrying about that act-structure nonsense. At the very least, the synopsis should include a “log line,” or a brief review of the game’s story, like this:

Hangnail:

Synopsis: A big, tough guy with heaps of muscles and a heart of gold walks through mazes and kills lots of stuff to battle evil, find his boxed lunch, and save the future of humanity…at least until the sequel comes out.

Characters

The second portion of the bible would include character reviews. The most important component of any effective narrative, whether it’s in a game, a movie, a TV show, or a novel, is good characters. They should have well-rounded histories and solid motivations. Most importantly, they should be clearly drawn out so anyone who reads the bible or works on the game sees the same person in their minds. If a writer or designer creates a game revolving around a Schwarzenegger-type action hero and fails to describe his all-American, psychopathic personality, the artist or renderer could end up drawing Marv Albert. Here’s what our character bible would say about Hangnail’s protagonist:

Character Name: Dirk Squarejaw

Age: Late 20s

Appearance: Ruggedly handsome and in the kind of impossibly good shape that you’d need to spend 25 hours a day in a gym to achieve.

Equipment: Death Ray of Death, Grenade of Severe Owies, Swiss Army Knife of Animosity, Pulse Cannon of Mild Mood Swings.

Attributes: Wonderfully and relentlessly violent, with an overdeveloped sense of honor. Dedicated to saving all life on Earth, or at least all attractive women on Earth. He enjoys painting in splattered blood, rainy days, long walks on the beach, thermonuclear devices, and backgammon.

Background: Orphaned at birth and raised by wolves, Dirk was rescued by nuns at the age of 4. The nuns instilled in the young Dirk his sense of honor and his bizarre obsession with backgammon. When the evil villain, General Payne, destroyed the nuns’ village to hijack all their dice, Dirk set out on his lifelong quest to end evil around the world. He will never rest until Payne is defeated, peace and justice restored, and double sixes rolled everywhere.

All the information in the character description above could be distilled into one long paragraph entry, if the designer chooses to limit the length or the scope of the bible. However, every character in the game (even supporting players) should be presented in this same detail.
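To keep those entries uniform across a large cast, one approach is to capture the template in a simple record type. The Python sketch below is our own illustration (the `CharacterEntry` class and its `summary` method are not from the article); its fields mirror the template used for Dirk above:

```python
from dataclasses import dataclass, field

@dataclass
class CharacterEntry:
    """One story-bible character entry; fields mirror the template above."""
    name: str
    age: str
    appearance: str
    equipment: list = field(default_factory=list)
    attributes: str = ""
    background: str = ""

    def summary(self) -> str:
        # Distill the entry into the one-paragraph form the text mentions.
        gear = ", ".join(self.equipment)
        return (f"{self.name} ({self.age}): {self.appearance} "
                f"Carries: {gear}. {self.attributes} {self.background}")

dirk = CharacterEntry(
    name="Dirk Squarejaw",
    age="Late 20s",
    appearance="Ruggedly handsome and impossibly fit.",
    equipment=["Death Ray of Death", "Swiss Army Knife of Animosity"],
    attributes="Relentlessly violent, with an overdeveloped sense of honor.",
    background="Orphaned at birth, raised by wolves, rescued by nuns.",
)
print(dirk.summary())
```

Because every character, even a supporting player, is forced through the same fields, no entry can silently skip the background or motivation that the artists and level designers depend on.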

Such enriching character sketches can provide inspiration when planning game maps or missions (depending on the game’s genre). For example, in Hangnail’s case, given Dirk’s devotion to backgammon, the designer could construct a maze or a level in which the objective is to slaughter all of General Payne’s agents to recover their ill-gotten dice.

Character description and background is one area where a story bible can really enrich an interactive game. If the bible can draw out a game’s central character with convincing depth and detail, the production can present an interesting and exciting person around whom you can build a game and story.

In some cases, the player becomes that character. In other games, the player merely guides an already existing character. In either case, the story bible can outline what the main character wants! That’s the key. The entire game story should be built around what the main character or hero wants and needs. Once that is pinpointed (be it the damsel in distress, a magic amulet, or the enemy capital), a designer can build an entire game around that quest. Battles in the cold reaches of space. Races through monster-filled mazes. Puzzle-solving through a haunted library. Anything that makes the game more entertaining can stand between the hero and the goal. But the goal must be clear, ever-present, and motivated. The story bible can help a design team do that.

In another example, if Dirk was scared of water because his wolf parents couldn’t swim, the designer might wish to create an underwater level and cause Dirk’s air supply to disappear quickly because he hyperventilates too easily.

Using a methodology like this, in which you define the background, attributes, age, appearance, and equipment of a character, can help ensure truly motivated and enjoyable characters and give the design team ideas for gameplay. A game’s characters need to be compelling. If the player becomes a hero in the game, that hero must be attractive enough that the player wants to assume that persona. A game villain should be rotten enough that the player generates genuine passion and satisfaction from defeating him or her.

An essential rule of thumb states that every character, even the most incredibly butch of heroes, needs to have weaknesses or shortcomings. If a character seems too omnipotent and has every skill imaginable down pat, no player will believe he or she could possibly lose or die. You don’t have to make your hero or heroine a simpering wimp, but don’t make them invulnerable. Even Superman has his kryptonite.

In the final document, Dirk’s bible entry might include an artist’s sketch (if created early in the game development process) or a 3D rendering (if created farther along in the development process) which might also be the actual avatar used in the game if the product makes it that far along.

To digress for just a moment, I have approached the use of game bibles for story development solely from the perspective of the hero thus far. Lately, games such as Bullfrog’s Dungeon Keeper and LucasArts’ Dark Forces II have made it possible for players to assume the role of the villain. However, that doesn’t turn the narrative rule on its ear—the same guidelines still apply. A villain also has wants and needs. In the best possible scenario, the bad guy wants exactly the same thing as the hero. In drama and writing courses, that’s called the “Law of Conflicting Need.” A good story (and therefore a good game, if it has story components) has a protagonist and an antagonist wanting the same thing for perfectly opposite reasons. We usually want the hero to get to that goal before the villain. However, in games where we become the villain, we assume the motivations of the villain. The bible should outline the history, personality, and motivation of the bad guy as well as the hero. That way, if we become the antagonist in gameplay, it works just as well as if we had chosen the hero’s role.

Resource:
http://www.computer-game-design.com

A History Into Microsoft Products

by: Mehmet Onatli

Microsoft Windows is the name of several families of software operating systems by Microsoft. Microsoft first introduced an operating environment named Windows in November 1985 as an add-on to MS-DOS in response to the growing interest in graphical user interfaces (GUIs). The most recent client version of Windows is Windows Vista. The current server version of Windows is Windows Server 2008.

Windows 1.0 (1985)

The first version of Windows provided a new software environment for developing and running applications that use bitmap displays and mouse pointing devices. Before Windows, PC users relied on the MS-DOS® method of typing commands at the C prompt (C:\). With Windows, users moved a mouse to point and click their way through tasks, such as starting applications.

In addition, Windows users could switch among several concurrently running applications. The product included a set of desktop applications, including the MS-DOS file management program, a calendar, card file, notepad, calculator, clock, and telecommunications programs, which helped users manage day-to-day activities.

Windows 2.0 (1987)

Windows 2.0 took advantage of the improved processing speed of the Intel 286 processor, expanded memory, and inter-application communication capabilities made possible through Dynamic Data Exchange (DDE). With improved graphics support, users could now overlap windows, control screen layout, and use keyboard combinations to move rapidly through Windows operations. Many developers wrote their first Windows–based applications for this release.

Windows 3.0 (1990)

The third major release of the Windows platform from Microsoft offered improved performance, advanced graphics with 16 colors, and full support of the more powerful Intel 386 processor. A new wave of 386 PCs helped drive the popularity of Windows 3.0, which offered a wide range of useful features and capabilities, including:

Program Manager, File Manager, and Print Manager.

A completely rewritten application development environment.

An improved set of Windows icons.

Windows NT 3.1 (1993)

When Microsoft Windows NT® was released to manufacturing on July 27, 1993, Microsoft met an important milestone: the completion of a project begun in the late 1980s to build an advanced new operating system from scratch.

Windows NT was the first Windows operating system to combine support for high-end, client/server business applications with the industry's leading personal productivity applications.

Windows for Workgroups 3.11 (1993)

A superset of Windows 3.1, Windows for Workgroups 3.11 added peer-to-peer workgroup and domain networking support. For the first time, Windows–based PCs were network-aware and became an integral part of the emerging client/server computing evolution.

Windows for Workgroups was used in local area networks (LANs) and on standalone PCs and laptop computers. It added features of special interest to corporate users, such as centralized configuration and security, significantly improved support for Novell NetWare networks, and remote access service (RAS).

Windows NT Workstation 3.5 (1994)

The Windows NT Workstation 3.5 release provided the highest degree of protection yet for critical business applications and data. With support for the OpenGL graphics standard, this operating system helped power high-end applications for software development, engineering, financial analysis, scientific, and business-critical tasks.

Windows 95 (1995)

Windows 95 was the successor to the three existing general-purpose desktop operating systems from Microsoft—Windows 3.1, Windows for Workgroups, and MS-DOS. Windows 95 integrated a 32-bit TCP/IP (Transmission Control Protocol/Internet Protocol) stack for built-in Internet support, dial-up networking, and new Plug and Play capabilities that made it easy for users to install hardware and software.

The 32-bit operating system also offered enhanced multimedia capabilities, more powerful features for mobile computing, and integrated networking.

Windows NT Workstation 4.0 (1996)

This upgrade to the Microsoft business desktop operating system brought increased ease of use and simplified management, higher network throughput, and tools for developing and managing intranets. Windows NT Workstation 4.0 included the popular Windows 95 user interface yet provided improved networking support for easier and more secure access to the Internet and corporate intranets.

Windows 98 (1998)

Windows 98 was the upgrade from Windows 95. Described as an operating system that "Works Better, Plays Better," Windows 98 was the first version of Windows designed specifically for consumers.

With Windows 98, users could find information more easily on their PCs as well as the Internet. Other ease-of-use improvements included the ability to open and close applications more quickly, support for reading DVD discs, and support for universal serial bus (USB) devices.

Windows 98 Second Edition (1999)

Windows 98 SE, as it was often abbreviated, was an incremental update to Windows 98. It offered consumers a variety of new and enhanced hardware compatibility and Internet-related features.

Windows 98 SE helped improve users' online experience with the Internet Explorer 5.0 browser technology and Microsoft Windows NetMeeting® 3.0 conferencing software. It also included Microsoft DirectX® API 6.1, which provided improved support for Windows multimedia, and offered home networking capabilities through Internet connection sharing (ICS).

Windows Millennium Edition (Windows Me) (2000)

Designed for home computer users, Windows Me offered consumers numerous music, video, and home networking enhancements and reliability improvements.

Windows Me was the last Microsoft operating system to be based on the Windows 95 code base. Microsoft announced that all future operating system products would be based on the Windows NT and Windows 2000 kernel.

Windows 2000 Professional (2000)

Windows 2000 added major improvements in reliability, ease of use, Internet compatibility, and support for mobile computing.

Among other improvements, Windows 2000 Professional simplified hardware installation by adding support for a wide variety of new Plug and Play hardware, including advanced networking and wireless products, USB devices, IEEE 1394 devices, and infrared devices.

Windows XP (2001)

With the release of Windows XP in October 2001, Microsoft merged its two Windows operating system lines for consumers and businesses, uniting them around the Windows 2000 code base.

With Windows XP, home users can work with and enjoy music, movies, messaging, and photos with their computer, while business users can work smarter and faster, thanks to new technical-support technology, a fresh user interface, and many other improvements that make it easier to use for a wide range of tasks.

Windows Vista (2007)

Windows Vista is a line of operating systems developed by Microsoft for use on personal computers, including home and business desktops, laptops, Tablet PCs, and media centers.

Windows Vista contains many changes and new features, including an updated graphical user interface and visual style dubbed Windows Aero, improved searching features, new multimedia creation tools such as Windows DVD Maker, and redesigned networking, audio, print, and display sub-systems.