A computer virus is executable code able to reproduce itself. Viruses are an area of pure programming and, unlike other computer programs, carry intellectual functions for protecting themselves from being found and destroyed. They have to fight for survival in the complex conditions of conflicting computer systems. That is why they evolve as if they were alive.

Yes, viruses seem to be the only living organisms in the computer environment, and their main goal is survival. That is why they may have complex encrypting/decrypting engines, which are indeed something of a standard for computer viruses nowadays, in order to carry out the processes of duplication, adaptation and disguise.

It is necessary to differentiate between reproducing programs and Trojan horses. Reproducing programs will not necessarily harm your system, because their aim is only to produce as many copies (or near-copies) of themselves as possible, with or without the help of so-called agent programs. In the latter case they are referred to as "worms".

Trojan horses, meanwhile, are programs aimed at causing harm or damage to PCs. Certainly it is common practice for them to be part of a "tech-organism", but they have completely different functions.

That is an important point. Destructive actions are not an integral part of the virus by default. However, virus-writers do include destructive mechanisms, both as an active protection against the discovery and destruction of their creations and as a response to society's attitude towards viruses and their authors.

As you see, there are different types of viruses, and they have already been separated into classes and categories: harmless, dangerous, and very dangerous. A virus that causes no destruction is harmless; one that plays tricks such as system halts is dangerous; and one that causes devastating destruction is very dangerous.

But viruses are famous not only for their destructive actions but also for their special effects, which are almost impossible to classify. Some virus-writers suggest the following categories: funny, very funny, and sad or melancholy (the virus keeps silent and infects). One should remember, though, that special effects should occur only after a certain number of contaminations. Users should also be given a chance to restrict execution

of destructive actions, such as deleting files or formatting hard disks. In that case a virus can even be considered a useful program, keeping a check on system changes and preventing surprises such as the deletion of files or the wiping of hard disks.

It sounds quite heretical to say such things about viruses, which are usually considered a disaster. Yet the less a person understands about programming and virology, the greater hold the possibility of infection will have on him. So let us consider the creators of viruses as the best source of information.

Who writes computer viruses?

They are either lone wolves or groups of programmers.

Although many people think that writing a computer virus is hard, this is not exactly so. Using special programs called "virus creators", even beginners in the computer world can build their own viruses, which will be a strain of a certain major virus. This was precisely the case with the notorious "Anna Kournikova" virus, which is actually a worm. The aim of creating viruses in this way is obvious: the author wants to become known all over the world and to show off his powers.

However, the results of such an attempt can be very sad (see a bit of history); only real professionals can become famous and stay uncaught. A good example is Dark Avenger. It is yet another custom of participants of "the scene" to take terrifying monikers (nicknames).

To write something really new and remarkable, a programmer should have some extra knowledge and skills, for example:

1) good strategic thinking and intuition - once released, a virus and its descendants live their own independent lives in nearly unpredictable conditions. Therefore the author must anticipate a lot of things;

2) splendid knowledge of Assembly language and of the operating system he writes for - the more mistakes there are in the virus, the quicker it will be caught;

3) attention to detail and the skill to solve the most varied tactical questions - one won't write a compact, satisfactorily working program without these abilities;

4) high professional discipline in order to join the preceding points together.


A computer virus group is an informal non-profit organisation uniting programmers who write viruses, regardless of their qualifications. Anyone can become a member of the club if he creates viruses, or studies them with a view to creating and spreading them.

The aims they pursue together may differ from those of a single virus writer, although they usually also try to become as famous as possible. At the same time, they may help programmers beginning in the field of viruses, spreading commented virus sources and descriptions of virus algorithms.

One can't say that all of the group members write viruses in Assembler. Actually, you don't have to know any computer language or write any program code at all to become a member or a friend of a group. But programming in Assembler is preferred; Pascal, C++ and other high-level languages are considered humiliating. This does make sense, since programs written in Assembler are much smaller (0.5-5 KB) and therefore more robust. On the other hand, Assembler is quite difficult to understand, especially for beginners. One has to think the way the computer does: all commands are sent directly to the central processing unit of the PC.

There are computer virus groups all over the world, some being more successful than others. It may be pretty hard to get in contact with them, since they are quite typical representatives of the computer underground, much like (free)warez groups. Sometimes, however, creating viruses can become a respectable occupation bringing a constant income. After all, no one but the author of a virus can provide valuable information on the way it should be treated and cured.





By Kris Fuller, National Instruments


In a very real sense, the Internet has changed the way we think about information and the exchange of resources. Now engineers are using the Internet and software applications to remotely monitor and perform distributed execution of test and control applications. Such an approach reduces the time and cost involved in tests by sharing optoelectronics instrumentation and by distributing tasks to optimal locations.

A typical automated test and control system uses a computer to control positioning equipment and instrumentation. We'll use the term "remote control" to refer to the technique of enabling an outside computer to connect


to an experiment and control that experiment from a distance. Such an approach benefits engineers who need to monitor applications running in harsh environments that offer them limited access, or who run tests whose long durations make continuous human monitoring impractical.

In addition, remote control offers engineers the ability to change test parameters at certain intervals without traveling to the site or even running from their office into another area of the building. This convenience allows a test operator to view results and make test modifications from home on the weekend, for example. The user simply logs on to the network from home, connects to the application, and makes those changes just as though he or she were on site.

Control via Internet

To effectively control applications via the Internet, companies are developing software programs that champion remote execution. For instance, LabVIEW (National Instruments; Austin, TX) allows users to configure many software applications for remote control through a common Web browser, simply by pointing the browser to a Web page associated with the application. Without any additional programming, the remote user gains full access to the user interface that appears in the browser. The acquisition still occurs on the host computer, but the remote user has complete control of the process and can view acquired data in real time. Other users can also point their browsers to the same URL to view the test.

Windows XP makes it easier to control applications via the Internet. With this Microsoft OS, users now get Remote Desktop and Remote Assistance, which offer tools for debugging deployed systems. After a system is deployed in the field, it is often cost-prohibitive for the support staff to visit every site. With Remote Desktop, a support operator can log in to a remote Windows XP machine and work as if he or she were sitting at the desk where that machine is located. With Remote Assistance, the onsite operator can remain in control of the desktop but the support operator can view the desktop on his or her remote machine. At any time, the onsite operator can give up control of the desktop to the support operator and still monitor which troubleshooting techniques are in use. Industry-standard software development tools take advantage of these new features.

At times, it may be desirable to use the Web browser to initiate a measurement or automation application but not actually control the experiment. In this case, the remote operator can log in, set certain parameters, and run the application over a common gateway interface (CGI). With CGI, the user communicates with a server-side program or script run by an HTTP server in response to an HTTP request from a Web


browser. This program normally builds HTML dynamically by accessing other data sources such as a database. As part of the HTTP request, the browser can send to the server the parameters to use in running the application.
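As a rough illustration of the CGI flow just described, the sketch below shows a server-side script that reads request parameters from the QUERY_STRING environment variable (where an HTTP server places them for a GET request) and builds an HTML page dynamically. The parameter names ("channel" and "samples") are hypothetical, not taken from the article.

```python
#!/usr/bin/env python3
# Minimal CGI-style script: the HTTP server passes the request's
# parameters via the QUERY_STRING environment variable; the script
# parses them and writes a dynamically built HTML page to stdout.
import os
from urllib.parse import parse_qs

def build_page(query_string):
    params = parse_qs(query_string)
    # Hypothetical test parameters with defaults.
    channel = params.get("channel", ["0"])[0]
    samples = params.get("samples", ["100"])[0]
    # In a real system, this is where the script would launch the
    # acquisition run with the requested settings.
    return (
        "<html><body>"
        "<h1>Test started</h1>"
        f"<p>Channel: {channel}, samples: {samples}</p>"
        "</body></html>"
    )

if __name__ == "__main__":
    # A CGI response is an HTTP header block, a blank line, then the body.
    print("Content-Type: text/html\r\n")
    print(build_page(os.environ.get("QUERY_STRING", "")))
```

A browser request such as `script.cgi?channel=2&samples=500` would thus come back as a page reporting the settings the run was started with.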

Distributed Execution

In classical remote control, one person or machine at a time is charged with controlling the experiment. In distributed execution, however, a user can truly take advantage of the benefits of networking, extending control to an entire remote system connected on the same network. In this way, individual machines focus on specific functions, and each system is optimized to perform its chosen task. Because data can be shared among the distributed components and each component accomplishes a unique task, this network functions as a complete system. For instance, it is possible to dedicate certain machines for acquisition and control while relegating analysis and presentation to other systems. Technology makes it possible to remotely monitor, control, and even run diagnostics while the system itself is dedicated to running acquisition and control, introducing the ability to multitask.

Certain test and control applications require an embedded, reliable solution. For these applications, the user can download the software to a headless, embedded controller to connect and control remotely. The controller can be a single unit or a series of form factors (such as the FieldPoint module that is able to perform monitoring and control tasks in harsh environments). In either case, software runs on a real-time operating system, but it can be accessed from a host computer using an Ethernet connection.

For example, consider a structural test system measuring the vibration and harmonics of a bridge design. It is possible to set up one node with a camera to monitor the testing of the bridge, then set up another node to measure parameters such as temperature, humidity, and wind direction and speed. Finally, one can set up a node to measure the load, strain, and displacement on certain areas of the bridge. The system can send all the data back to a main computer that correlates the data, analyzes it, and displays the results of the test on a Web page.

Each of these nodes would need to run autonomously, acquiring data and sending it on to other computers that correlate the data and create reports. With the right software and hardware, each measurement node becomes an embedded, reliable, and durable solution. The user could easily

control any of the measurement nodes to modify the parameters of the test. In some systems, the test and its code are developed on a Windows operating system and then downloaded to the measurement node. This enables the user to make major modifications to the test and download them to the embedded target without visiting the site.

Next, one of the live data-sharing techniques could be used to transfer the data to another cluster of computers that would correlate and analyze the data. Finally, an Internet server could allow project members to share the Web reports and analysis in geographically separated locations.