Some video games also use multimedia features. Multimedia applications that allow users to actively participate, instead of just sitting by as passive recipients of information, are called interactive multimedia. In the field of arts there are multimedia artists, who blend techniques using different media and in some way incorporate interaction with the viewer. In education, multimedia is used to produce computer-based training courses (popularly called CBTs) and reference books like encyclopedias and almanacs.
A CBT lets the user go through a series of presentations, text about a particular topic, and associated illustrations in various information formats. Edutainment is the combination of education with entertainment, especially multimedia entertainment. Learning theory in the past decade has expanded dramatically because of the introduction of multimedia.
Several lines of research have evolved, e.g., cognitive load and multimedia learning, and the list goes on. The possibilities for learning and instruction are nearly endless. The idea of media convergence is also becoming a major factor in education, particularly higher education. Defined as separate technologies such as voice (and telephony features), data, and video that now share resources and interact with each other, synergistically creating new efficiencies, media convergence is rapidly changing the curriculum in universities all over the world.
Likewise, it is changing the availability, or lack thereof, of jobs requiring this savvy technological skill.
Journalism
Newspaper companies all over are also trying to embrace the new phenomenon by implementing its practices in their work. While some have been slow to come around, other major newspapers like The New York Times, USA Today and The Washington Post are setting the precedent for the positioning of the newspaper industry in a globalized world.
News reporting is not limited to traditional media outlets. Freelance journalists can make use of different new media to produce multimedia pieces for their news stories.
Engineering
Software engineers may use multimedia in computer simulations for anything from entertainment to training, such as military or industrial training. Multimedia for software interfaces is often done as a collaboration between creative professionals and software engineers.
Industry
In the industrial sector, multimedia is used as a way to help present information to shareholders, superiors and coworkers.
Multimedia is also helpful for providing employee training, advertising and selling products all over the world via virtually unlimited web-based technology.
Mathematical and scientific research
In mathematical and scientific research, multimedia is mainly used for modeling and simulation. For example, a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new substance.
Representative research can be found in journals such as the Journal of Multimedia.
Medicine
In medicine, doctors can get trained by looking at a virtual surgery, or they can simulate how the human body is affected by diseases spread by viruses and bacteria and then develop techniques to prevent them. The progression of graphical user interfaces opened the way for a variety of multimedia applications, such as document imaging, image processing, and image recognition, which is intended for recognizing objects by analyzing their raster images.
In the subsequent sections, we will look at these applications and then present a view of generic multimedia applications. Organizations such as insurance agencies, law offices, and county and state governments, including the Department of Defense, manage large volumes of documents. In fact, the Department of Defense (DOD) is among the early adopters of document image technology, for applications ranging from military personnel records to maintenance manuals and high-speed printing systems. The interest in imaging is due to its workflow management and contribution to productivity.
Document imaging makes it possible to store, retrieve, and manipulate very large volumes of drawings, documents, and other graphical representations of data. Imaging also provides important benefits in terms of electronic data interchange, such as in the case of sending large volumes of engineering data about complex systems in electronic form rather than on paper. Imaging is already being used for a variety of applications. An application such as medical claims processing not only speeds payment to healthcare facilities, but cuts the costs of reentering information from claim forms into a computer database.
OCR systems now automatically handle the task of data entry of key fields. In a document image workflow management system, the original is not altered; rather, annotations are recorded and stored separately. An image processing system, on the other hand, may actually alter the contents of the image itself. Let us briefly review the various aspects of image processing and image recognition. Image enhancement: Most image display systems provide some level of image enhancement. This may be a simple scanner sensitivity adjustment, very much akin to the light-dark adjustment in a copier.
Increasing the sensitivity and contrast makes the picture darker by making borderline pixels black or by increasing the gray level of pixels. Or it may be more complex, with capabilities built into the compression boards. These capabilities might include the following: (i) Image calibration: the overall image density is calibrated, and the image pixels are adjusted to a predefined level.
(ii) Automatic hue intensity adjustment: the hue intensity is brought within predefined ranges. The hardware used can detect and adjust the range of color separation. Image animation: Computer-created or scanned images can be displayed sequentially at controlled display speeds to provide image animation that simulates real processes. Image animation is a technology that was popularized by Walt Disney and brought into every home in the form of cartoons. The basic concept of displaying successive images at short intervals to give the perception of motion is being used successfully in designing moving parts such as automobile engines.
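The sensitivity and contrast adjustment described above can be sketched as a simple per-pixel threshold operation. This is a hypothetical illustration (the function name and threshold value are assumptions), not a production image-processing routine:

```python
# Sketch: scanner-style sensitivity adjustment on an 8-bit grayscale image.
# Pixels at or above the threshold stay white (255); the rest become black (0),
# mimicking how raising sensitivity pushes borderline pixels to black.

def threshold_image(pixels, threshold):
    """Binarize a grayscale image given as a list of rows of 0-255 values."""
    return [[255 if p >= threshold else 0 for p in row] for row in pixels]

image = [[12, 130, 250],
         [90, 128, 200]]

# A higher threshold darkens the image: more pixels fall below it.
print(threshold_image(image, 128))  # [[0, 255, 255], [0, 255, 255]]
```

Raising the threshold sends more borderline pixels to black, which is the "darker picture" effect the text describes.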
Image annotation: Image annotation can be performed in one of two ways: as a text file stored along with the image, or as a small image stored with the original image. OCR technology, used for data entry by scanning typed or printed words in a form, has been in use for quite some time. That is, the full-motion video should be indexed. These systems were the first alternative to paper-based interoffice memos. This increased capability has tremendous potential to change the interaction among mail-enabled workers, who can exchange information much more rapidly even when they are widely distributed geographically.
The availability of other technologies, such as audio compression and decompression and full-motion video, has opened new ways in which electronic mail can be used. What used to be a text document has given way in stages to a complex rich-text document with attachments and, more recently, to a very complex hypermedia document. With this capability in mind, electronic messaging is changing from being a communication medium to a workgroup application.
The term is often assumed to imply or include the processing, compression, storage, printing, and display of such images. The most usual method is digital photography with a digital camera.
Methods
A digital photograph may be created directly from a physical scene by a camera or similar device. Alternatively, a digital image may be obtained from another image in an analog medium, such as a photograph, photographic film, or printed paper, by an image scanner or similar device.
The digitization of analog real-world data is known as digitizing, and involves sampling and quantization. Finally, a digital image can also be computed from a geometric model or mathematical formula. In this case the name image synthesis is more appropriate, and it is more often known as rendering. Previously, digital imaging depended on chemical and mechanical processes; now these processes have all been converted to electronic ones.
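The sampling-and-quantization step can be sketched in a few lines. This is an illustrative toy (the function name, the 1 Hz test tone, and the 8-bit mapping are assumptions), not a real digitizer:

```python
import math

# Sketch: digitizing an analog signal by sampling and quantization.
# We sample a sine wave at a chosen rate and quantize each sample
# to 8-bit unsigned values (0-255), as an 8-bit digitizer would.

def digitize(rate_hz, duration_s, freq_hz=1.0):
    samples = []
    n = int(rate_hz * duration_s)
    for i in range(n):
        t = i / rate_hz
        analog = math.sin(2 * math.pi * freq_hz * t)   # continuous value in [-1, 1]
        quantized = round((analog + 1) / 2 * 255)      # map to discrete 0..255
        samples.append(quantized)
    return samples

data = digitize(rate_hz=8, duration_s=1.0)
print(len(data))   # 8 samples for one second at 8 Hz
print(data[0])     # sin(0) = 0.0 maps to the midpoint, 128
```

Sampling picks the instants at which the wave is measured; quantization rounds each measurement to one of a fixed set of levels.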
A few things need to take place for digital imaging to occur. First, the light energy is converted to electrical energy; think of a grid with millions of little solar cells. Each condition generates a specific electrical charge. The charges for each of these "solar cells" are transported and communicated to the firmware to be interpreted. The firmware is what understands and translates the color and other light qualities.
Pixels come next: with varying intensities, they create different colors, forming a picture or image. Finally, the firmware records the information for future and further reproduction.
Advantages
There are several benefits of digital imaging. Digital imaging reduces the need for physical contact with original images, thus eliminating the potential that the original would be modified or destroyed. Furthermore, digital imaging creates the possibility of reconstructing the visual contents of partially damaged photographs.
We are able to take cameras with us wherever we go, as well as send photos instantly to others. Digital imaging is easy for people to use, and it helps in the process of self-identification for the younger generation. This unit also covered the digital imaging process. We listed text, graphics, images, fractals, audio, and video as the components that can be found in multimedia systems. Any multimedia design must address how each of these components will be handled. These must be addressed by applications such as document imaging, image processing, full-motion digital video applications, and electronic messaging.
We also discussed multimedia in learning and the use of multimedia systems in different areas such as creative industries, commercial uses, entertainment and fine arts, journalism, engineering, industry, mathematical and scientific research, medicine, and document imaging. We have also discussed the methods for creating digital images and the benefits of digital imaging.
Define multimedia. List the multimedia elements. Explain any two elements of a multimedia system.
Discuss the applications of a multimedia system. What are the advantages of a multimedia system? Discuss briefly about digital imaging. Multimedia once meant a slide projector and a tape recorder being played simultaneously. The technology for joining individual media did not exist at that time.
Today, the term multimedia is associated almost exclusively with the computer, and the components that make up a multimedia program are digital. Various media are brought together to perform in unison on the computer as a single entity, and they are programmed or scripted using authoring software or programming languages. Hypermedia: Hypermedia is the use of text, data, graphics, audio and video as elements of an extended hypertext system in which all elements are linked and the content is accessible via hyperlinks. Text, audio, graphics, and video are interconnected, creating a compilation of information that is generally considered a non-linear system.
The modern World Wide Web is the best example of hypermedia: its content is interactive most of the time, and hence non-linear. Hypertext is a subset of hypermedia, and the term was first used by Ted Nelson in 1965. Hypermedia content can be developed using specialized software such as Adobe Flash, Adobe Director and Macromedia Authorware. Some business software such as Adobe Acrobat and the Microsoft Office Suite offers limited hypermedia features, with hyperlinks embedded in the document itself. Interactive multimedia: Any computer-delivered electronic system that allows the user to control, combine, and manipulate different types of media, such as text, sound, video, computer graphics, and animation.
Interactive multimedia integrates computer, memory storage, digital (binary) data, telephone, television, and other information technologies. Its most common applications include training programs, video games, electronic encyclopedias, and travel guides. Whereas we may think of a book as a linear medium, basically meant to be read from beginning to end, a hypertext system is meant to be read nonlinearly, by following links that point to other parts of the document, or indeed to other documents.
Hypermedia is not constrained to be text-based. It can include other media, such as graphics, images, and especially the continuous media: sound and video. Apparently, Ted Nelson was also the first to use this term. As we have seen, multimedia fundamentally means that computer information can be represented through audio, graphics, images, video, and animation in addition to traditional media (text and graphics). Hypermedia can be considered one particular multimedia application. Examples of typical multimedia applications include: digital video editing and production systems; electronic newspapers and magazines; the WWW; online reference works, such as encyclopedias; games; groupware; home shopping; interactive TV; multimedia courseware; video conferencing; video-on-demand; and interactive movies.
Smartphones are another example. The popularity of the WWW is due to the amount of information available from web servers, the capacity to post such information, and the ease of navigating such information with a web browser. The W3C has listed the following three goals for the WWW: universal access to web resources, effectiveness of navigating available information, and responsible use of posted material. Important aspects of audio are psychoacoustics, music, the MIDI standard, and speech synthesis and analysis.
Sound
Sound is a physical phenomenon caused by the vibration of a material, such as a violin string or a wood log.
This type of vibration triggers pressure wave fluctuations in the air around the material. The pattern of this oscillation is called a waveform. We hear a sound when such a wave reaches our ears. The greater the period (the distance between successive wave peaks), the lower the sound. Similarly, the frequency represents the number of periods per second and is measured in hertz (Hz), or cycles per second (cps).
Amplitude
A sound has a property called amplitude, which humans perceive subjectively as loudness or volume. The amplitude of a sound is a measure of the deviation of the pressure wave from its mean value. Sound perception and psychoacoustics: Psychoacoustics is a discipline that studies the relationship between acoustic waves arriving at the ear and the listener's perception of sound.
There are two main perspectives: (i) the physical (acoustic) perspective and (ii) the perceptual (psychoacoustic) perspective.
Digital Audio
All multimedia file formats are capable, by definition, of storing sound information. Sound data, like graphics and video data, has its own special requirements when it is being read, written, interpreted, and compressed. Before looking at how sound is stored in a multimedia format, we must look at how sound itself is stored as digital data. All of the sounds that we hear occur in the form of analog signals. An analog audio recording system, such as a conventional tape recorder, captures the entire sound waveform and stores it in analog format on a medium such as magnetic tape.
Because computers are digital devices, it is necessary to store sound information in a digitized format that computers can readily use. A digital audio recording system does not record the entire waveform as analog systems do (the exception being Digital Audio Tape [DAT] systems). Instead, a digital recorder captures a waveform at specific intervals, called the sampling rate. Each captured waveform snapshot is converted to a binary integer value and is then stored on magnetic tape or disk.
Pulse Code Modulation (PCM) is a simple quantizing, or digitizing, audio-to-digital conversion algorithm, which linearly converts all analog signals to digital samples. Differential Pulse Code Modulation (DPCM) is an audio encoding scheme that quantizes the difference between samples rather than the samples themselves. Because the differences are easily represented by values smaller than those of the samples themselves, fewer bits may be used to encode the same sound (for example, the difference between two 16-bit samples may only be four bits in size).
For this reason, DPCM is also considered an audio compression scheme. DPCM is a non-adaptive algorithm: it does not change the way it encodes data based on the content of the data. Adaptive DPCM (ADPCM), however, is an adaptive algorithm and changes its encoding scheme based on the data it is encoding. ADPCM specifically adapts by using fewer bits to represent lower-level signals than it does to represent higher-level signals.
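The non-adaptive DPCM idea above can be sketched in a few lines. This is a minimal illustration of difference coding (function names and sample values are made up), not a real codec:

```python
# Sketch: non-adaptive DPCM, storing the difference between successive
# samples instead of the samples themselves. Neighboring audio samples are
# usually close in value, so the differences are small numbers that need
# fewer bits than the full samples.

def dpcm_encode(samples):
    prev = 0
    diffs = []
    for s in samples:
        diffs.append(s - prev)  # store only the change
        prev = s
    return diffs

def dpcm_decode(diffs):
    prev = 0
    samples = []
    for d in diffs:
        prev += d               # accumulate changes to rebuild samples
        samples.append(prev)
    return samples

audio = [100, 104, 103, 99, 98]
encoded = dpcm_encode(audio)
print(encoded)                        # [100, 4, -1, -4, -1]
assert dpcm_decode(encoded) == audio  # lossless round trip
```

An adaptive variant (ADPCM) would additionally vary the number of bits spent per difference depending on the signal level.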
Digital audio data is simply a binary representation of a sound. This data can be written to a binary file using an audio file format for permanent storage, much in the same way bitmap data is preserved in an image file format. The data can be read by a software application, can be sent as data to a hardware device, and can even be stored on a CD-ROM. The quality of an audio sample is determined by comparing it to the original sound from which it was sampled.
The more identical the sample is to the original sound, the higher the quality of the sample. This is similar to comparing an image to the original document or photograph from which it was scanned. The larger the sampling size, the higher the quality of the sample. Just as the apparent quality resolution of an image is reduced by storing fewer bits of data per pixel, so is the quality of a digital audio recording reduced by storing fewer bits per sample.
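Sampling size, sampling rate, and channel count together determine how much storage uncompressed audio needs. A small sketch of that arithmetic (the function name is an assumption; CD-quality figures are standard values):

```python
# Sketch: uncompressed audio storage size
#   bytes = rate (samples/s) x (bits per sample / 8) x channels x seconds

def audio_bytes(rate_hz, bits, channels, seconds):
    return rate_hz * (bits // 8) * channels * seconds

# One minute of CD-quality audio: 44,100 Hz, 16-bit samples, stereo.
cd_minute = audio_bytes(44_100, 16, 2, 60)
print(cd_minute)  # 10584000 bytes, roughly 10 MB per minute
```

Halving the sampling rate, sample size, or channel count halves the storage, at a corresponding cost in quality.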
Typical sampling sizes are 8 bits and 16 bits. The sampling rate is the number of times per second the analog waveform was read to collect data. The higher the sampling rate, the greater the quality of the audio. A high sampling rate collects more data per second than a lower sampling rate, therefore requiring more memory and disk space to store. Common sampling rates are 11.025 kHz, 22.05 kHz, and 44.1 kHz. Two-channel (stereo) sampling provides greater quality than mono sampling and, as you might have guessed, produces twice as much data by doubling the number of samples captured. MIDI is not an audio format, however.
It does not store actual digitally sampled sounds. Instead, MIDI stores a description of sounds, in much the same way that a vector image format stores a description of an image and not the image data itself. Sound in MIDI data is stored as a series of control messages. Each message describes a sound event using terms such as pitch, duration, and volume. When these control messages are sent to a MIDI-compatible device (the MIDI standard also defines the interconnecting hardware used by MIDI devices and the communications protocol used to interchange the control information), the information in the message is interpreted and reproduced by the device.
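To make the "control message" idea concrete, here is a sketch of one common MIDI message. A Note On message is three bytes: a status byte (0x90 plus the channel number), the note's pitch (0-127), and its velocity (0-127). The helper function is a hypothetical illustration, not part of any MIDI library:

```python
# Sketch: building a MIDI Note On control message by hand.
# Status byte 0x90 | channel, then pitch and velocity (each 7-bit, 0-127).

def note_on(channel, pitch, velocity):
    assert 0 <= channel <= 15 and 0 <= pitch <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, pitch, velocity])

msg = note_on(channel=0, pitch=60, velocity=64)  # 60 = middle C
print(msg.hex())  # '903c40'
```

Three bytes describe an entire note event; the receiving synthesizer supplies the actual sound, which is why MIDI files are so much smaller than sampled audio.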
MIDI data may be compressed, just like any other binary data, and does not require special compression algorithms in the way that audio data does. Graphics are normally created in a graphics application and internally represented as an assemblage of objects such as lines, curves, or circles. Attributes such as style, width, and color define the appearance of graphics. The objects that graphics are composed of can be individually deleted, added, moved, and modified later.
In contrast, images can be from the real world or virtual, and are not editable in the sense described above. While not all formats are cross-platform, there are conversion applications that will recognize and translate formats from other systems. For example, the table shows a list of file formats used in the popular Macromedia Director. These image points are termed pixels (a contraction of "picture element").
Instead, a vector graphics file format is composed of analytical geometry formula representations for basic geometric shapes, e.g., lines, curves, or circles. A 1-bit image consists of on and off pixels only and thus is the simplest type of image. Each pixel is stored as a single bit. Hence, such an image is also referred to as a binary image. It is also called a 1-bit monochrome image, since it contains no color. In a gray-level image, each pixel is represented by a single byte; for example, a dark pixel might have a value of 10, and a bright one might have a value of 230. The entire image can be thought of as a 2-dimensional array of pixel values.
Notice that here we are using an aspect ratio of 4:3. (Figure: representation of bit-planes.) Each bit-plane can have a value of 0 or 1 at each pixel but, together, all the bit-planes make up a single byte that stores values between 0 and 255. Compression techniques can be classified as either lossless or lossy. An important point to note is that many 24-bit color images are actually stored as 32-bit images, with the extra byte of data for each pixel storing an alpha value representing special-effect information.
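The bit-plane decomposition described above can be sketched directly: plane k holds bit k of every pixel, and summing plane_k x 2^k recovers the original bytes. A minimal illustration with made-up pixel values:

```python
# Sketch: decomposing an 8-bit grayscale image into eight bit-planes.
# Bit-plane k holds bit k of every pixel; together the planes reconstruct
# the original byte values (0-255).

def bit_plane(pixels, k):
    """Extract bit-plane k (0 = least significant) from a row-major image."""
    return [[(p >> k) & 1 for p in row] for row in pixels]

image = [[200, 5], [129, 64]]

planes = [bit_plane(image, k) for k in range(8)]

# Reassemble: the sum of plane_k * 2^k recovers every pixel exactly.
rebuilt = [[sum(planes[k][r][c] << k for k in range(8)) for c in range(2)]
           for r in range(2)]
print(rebuilt)  # [[200, 5], [129, 64]]
```

The high-order planes carry most of the visual information, which is why dropping low-order planes is a crude form of lossy compression.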
Many systems can make use of only 8 bits of color information in producing a screen image. Even if a system has the electronics to actually use 24-bit information, backward compatibility demands that we understand 8-bit color image files. Such image files use the concept of a lookup table to store color information. Basically, the image stores not color but just a set of bytes, each of which is an index into a table of 3-byte values that specify the color for a pixel with that lookup table index. (Figure: each block of the color picker corresponds to one row of the color LUT.)
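Indexed color can be sketched as a plain dictionary lookup. The palette entries and image bytes below are made up for illustration; a real 8-bit file would carry a 256-entry table:

```python
# Sketch: 8-bit indexed color. The file stores one byte per pixel; each byte
# indexes a lookup table (palette) of 3-byte RGB entries.

palette = {
    0: (0, 0, 0),        # black
    1: (255, 0, 0),      # red
    2: (0, 255, 0),      # green
    3: (255, 255, 255),  # white
}

indexed_image = [[3, 1], [0, 2]]  # one byte per pixel

# Display: resolve every index through the LUT to get true-color pixels.
true_color = [[palette[i] for i in row] for row in indexed_image]
print(true_color[0][0])  # (255, 255, 255): index 3 resolves to white
```

The image data stays small (one byte per pixel) while the palette carries the full 3-byte color definitions.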
Basics
The human eye is the receptor for taking in still pictures and motion pictures. Its inherent properties determine, in conjunction with neuronal processing, some of the basic requirements underlying video systems. Representation of video signals: In conventional black-and-white television sets, the video signal is usually generated by means of a CRT. The representation of a video signal comprises three aspects: visual representation, transmission, and digitization.
In order to achieve this goal, the TV picture has to accurately convey the spatial and temporal content of the scene. The horizontal field of view can be determined using the aspect ratio. In a flat TV picture, a considerable portion of depth perception is derived from the perspective appearance of the subject matter. Further, the choice of focal length of the camera lens and changes in depth of focus influence depth perception.
In contrast to the continuous pressure waves of an acoustic signal, a discrete sequence of still images can be perceived as a continuous sequence. The impression of motion is generated by a rapid succession of barely differing still pictures (frames). Between frames, the light is cut off briefly. Two conditions must be met in order to represent visual reality through motion pictures. First, the rate of repetition of the images must be high enough to ensure continuity of movement from frame to frame.
Films recorded at 24 frames per second look strange when large objects close to the viewer move quickly. In order to encode color, consider the decomposition of a video signal into three sub-signals. For reasons of transmission, a video signal comprises a luminance signal and two chrominance signals. In NTSC and PAL systems, the composite transfer of chrominance and luminance in a single channel is accomplished by specifying the chrominance carrier to be an odd multiple of half the line-scanning frequency.
TV practice uses YUV or similar color models because the U and V channels can be downsampled to reduce data volume without materially degrading image quality. This type of signal is called a composite video signal and is not really useful in high-quality computer video. Therefore, a standard composite video signal is usually separated into its basic components before it is digitized. Each system supports various resolutions and color representations. All of the animated sequences seen in educational programs, motion CAD renderings, and computer games are computer-generated animated sequences.
When a large number of these cells are displayed in sequence and at a fast rate, the animated figures appear to the human eye to move. A computer-animated sequence works in exactly the same manner. A series of images is created of a subject; each image contains a slightly different perspective on the animated subject.
When these images are displayed (played back) in the proper sequence and at the proper speed (frame rate), the subject appears to move. Computerized animation is actually a combination of both still and motion imaging. Each frame, or cell, of an animation is a still image that requires compression and storage. An animation file, however, must store the data for hundreds or thousands of animation frames and must also provide the information necessary to play back the frames using the proper display mode and frame rate.
Animation file formats are only capable of storing still images and not actual video information. It is possible, however, for most multimedia formats to contain animation information, because animation is actually a much easier type of data than video to store. The image-compression schemes used in animation files are also usually much simpler than most of those used in video compression. Most animation files use a delta compression scheme, a form of Run-Length Encoding (RLE) that stores and compresses only the information that is different between two images rather than compressing each image frame entirely.
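The delta idea can be sketched in a few lines: record only the pixels that changed between frames. This is a toy illustration (one-dimensional "frames", made-up names), not a real animation codec, and it shows only the delta step before any run-length encoding:

```python
# Sketch: delta compression between animation frames. Only pixels that
# differ from the previous frame are stored, as (index, new_value) pairs.

def frame_delta(prev, curr):
    """Record (index, new_value) for every pixel that changed."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_delta(prev, delta):
    frame = list(prev)
    for i, v in delta:
        frame[i] = v
    return frame

frame1 = [7, 7, 7, 7, 7, 7]
frame2 = [7, 7, 9, 9, 7, 7]   # only two pixels moved between frames

delta = frame_delta(frame1, frame2)
print(delta)  # [(2, 9), (3, 9)]
assert apply_delta(frame1, delta) == frame2
```

Because consecutive animation frames are nearly identical, the delta list is tiny compared to the full frame, and it stays cheap to apply during playback.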
RLE is relatively easy to decompress on the fly. Storing animations using a multimedia format also produces the benefit of adding sound to the animation. Most animation formats cannot store sound directly in their files and must rely on storing the sound in a separate disk file which is read by the application that is playing back the animation. Animations are not only for entertaining kids and adults. Animated sequences are used by CAD programmers to rotate 3D objects so they can be observed from different perspectives; mathematical data collected by an aircraft or satellite may be rendered into an animated fly-by sequence.
Movie special effects benefit greatly from computer animation. Color is such a complex entity that we take it for granted every day, as people and as designers. The truth is there is a lot of science and color theory history behind it. This article briefly details some of the rich and interesting history behind color.
Color Theory
A major portion of art and design either relies on or utilizes color in some way and, at first glance, color seems really easy to wield.
But if you've tried serious coloring, you might have realized that it is difficult to get the colors to mesh or print correctly. This is because the way the eye perceives light as color and the way that substances combine to make color are different. Color theory is incredibly involved, with a lot of different factors that make up color. Color theory has developed over time as different mediums such as pigments, inks, and other forms of media became more complex and easier to produce.
There are currently three sets of primary colors, depending on what materials are being used.
Color
Understanding how colors are defined in graphics data is important to understanding graphics file formats. In this section, we touch on some of the many factors governing how colors are perceived.
How We See Color
The eye has a finite number of color receptors that, taken together, respond to the full range of light frequencies (about 400 to 700 nanometers). As a result, the eye theoretically supports only the perception of about 10,000 different colors simultaneously (although, as we have mentioned, many more colors than this can be perceived, though not resolved simultaneously).
The eye is also biased to the kind of light it detects. It's most sensitive to green light, followed by red, and then blue. It's also the case that the visual perception system can sense contrasts between adjacent colors more easily than it can sense absolute color differences, particularly if those colors are physically separated in the object being viewed.
In addition, the ability to discern colors varies from person to person; it's been estimated that one out of every twelve people has some form of color blindness. The size of a pixel on a typical CRT display screen, for example, is less than a third of a millimeter in diameter. When a large number of pixels are packed together, each one a different color, the eye is unable to resolve where one pixel ends and the next one begins from a normal viewing distance. The brain, however, must do something to bridge the gap between two adjacent, differently colored pixels, and will integrate, average, ignore the blur, or otherwise adapt to the situation.
For these reasons and others, the eye typically perceives many fewer colors than are physically displayed on the output device.
How Colors Are Represented
Several different mathematical systems exist which are used to describe colors. This section briefly describes the color systems most commonly used in graphics file formats. For purposes of discussion here, colors are always represented by numerical values. The most appropriate color system to use depends upon the type of data contained in the file.
For example, 1-bit, gray-scale, and color data might each best be stored using a different color model. Color systems used in graphics files are typically of the tri-chromatic colorimetric variety, otherwise known as primary 3-color systems. With such systems, a color is defined by specifying an ordered set of three values. Composite colors are created by mixing varying amounts of the three primaries, which results in the creation of a new color. Primary colors are those which cannot be created by mixing other colors. The totality of colors that can be created by mixing primary colors makes up the color space, or color gamut.
Additive and subtractive color systems
Color systems can be separated into two categories: additive color systems and subtractive color systems. Colors in additive systems are created by adding colors to black to create new colors. The more color that is added, the more the resulting color tends towards white. The presence of all the primary colors in sufficient amounts creates pure white, while the absence of all the primary colors creates pure black. Additive color environments are self-luminous; color on monitors, for instance, is additive. In subtractive systems, conceptually, primary colors are subtracted from white to create new colors.
The more color that is subtracted, the more the resulting color tends towards black. Thus, the presence of all the primary colors theoretically creates pure black, while the absence of all primary colors theoretically creates pure white. Another way of looking at this process is that black is the total absorption of all light by color pigments.
Subtractive environments are reflective in nature, and color is conveyed to us by reflecting light from an external source. Any color image reproduced on paper is an example of the use of a subtractive color system. No color system is perfect. As an example, in a subtractive color system the presence of all colors creates black, but in real-life printing the inks are not perfect. Mixing all ink colors usually produces a muddy brown rather than black.
The blacks we see on paper are only approximations of the mathematical ideal, and likewise for other colors.
RGB (Red-Green-Blue)
RGB is an additive system in which varying amounts of the colors red, green, and blue are added to black to produce new colors. Graphics files using the RGB color system represent each pixel as a color triplet: three numerical values in the form (R, G, B), each representing the amount of red, green, and blue in the pixel, respectively. For 24-bit color, the triplet (0,0,0) normally represents black, and the triplet (255,255,255) represents white.
When the three RGB values are set to the same value (for example, (63,63,63) or (127,127,127)), the resulting color is a shade of gray.
CMY (Cyan-Magenta-Yellow)
CMY is a subtractive color system used by printers and photographers for the rendering of colors with ink or emulsion, normally on a white surface. It is used by most hard-copy devices that deposit color pigments on white paper, such as laser and ink-jet printers.
When illuminated, each of the three colors absorbs its complementary light color. By increasing the amount of yellow ink, for instance, the amount of blue in the image is decreased. As in all subtractive systems, we say that in the CMY system colors are subtracted from white light by pigments to create new colors. The new colors are the wavelengths of light reflected, rather than absorbed, by the CMY pigments. For example, when cyan and magenta are absorbed, the resulting color is yellow. The yellow pigment is said to "subtract" the cyan and magenta components from the reflected light.
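In the idealized model described above, CMY is simply the complement of RGB: each pigment subtracts its complementary light color. A rough sketch, assuming perfect pigments and 8-bit channels:

```python
def rgb_to_cmy(rgb):
    """Idealized complement: each CMY pigment absorbs the
    complementary RGB light color."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

# RGB yellow (255, 255, 0) needs only yellow pigment:
# cyan = 0, magenta = 0, yellow = 255.
print(rgb_to_cmy((255, 255, 0)))  # (0, 0, 255)
```

Real inks deviate from this ideal, which is exactly why the muddy-brown problem described earlier arises in practice.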
When all of the CMY components are subtracted, or absorbed, the resulting color is black. Almost: whether it is possible to get a perfect black is debatable. Certainly, a good black color is not obtainable without expensive inks. To compensate for inexpensive and off-specification inks, the color black (K) is tacked onto the color system and treated something like an independent primary color variable.
For this reason, use of the CMYK scheme is often called 4-color printing, or process color. In many systems, a dot of composite color is actually a grouping of four dots, each one of the CMYK colors. This can be readily seen with a magnifying lens by examining a color photograph reproduced in a glossy magazine. If expressed as a color triple, the individual color values are just the opposite of RGB.
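The usual textbook way of deriving K from CMY is to pull the shared gray component out of the three inks and print it with black ink instead. This is a simplified formula, not a description of any particular printer driver; values here are fractions in 0..1:

```python
def cmy_to_cmyk(c, m, y):
    """Gray-component replacement: K is the part common to C, M, Y;
    the remaining C, M, Y are rescaled to the ink that is left."""
    k = min(c, m, y)
    if k == 1.0:                       # pure black: K alone suffices
        return (0.0, 0.0, 0.0, 1.0)
    return ((c - k) / (1 - k),
            (m - k) / (1 - k),
            (y - k) / (1 - k),
            k)

# Full C, M, Y collapse entirely into the black channel:
print(cmy_to_cmyk(1.0, 1.0, 1.0))     # (0.0, 0.0, 0.0, 1.0)
```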
For a 24-bit pixel value, for example, the triplet (255, 255, 255) is black, and the triplet (0, 0, 0) is white. In most cases, however, CMYK is expressed as a series of four values. In many real-world color composition systems, the four CMYK color components are specified as percentages in the range of 0 to 100. HSV Hue, Saturation, and Value HSV is one of many color systems that vary the degree of properties of colors to create new colors, rather than using a mixture of the colors themselves.
Hue specifies "color" in the common use of the term, such as red, orange, blue, and so on. Saturation (also called chroma) refers to the amount of white in a hue; a fully (100 percent) saturated hue contains no white and appears pure. By extension, a partly saturated hue appears lighter in color due to the admixture of white. A red hue with 50 percent saturation appears pink, for instance. Value refers to the intensity or brightness of a hue: a hue with high intensity is very bright, while a hue with low intensity is dark. HSV (also called HSB, for Hue, Saturation, and Brightness) most closely resembles the color system used by painters and other artists, who create colors by adding white, black, and gray to pure pigments to create tints, shades, and tones.
A tint is a pure, fully saturated color combined with white, and a shade is a fully saturated color combined with black. A tone is a fully saturated color with both black and white (that is, gray) added to it. If we relate HSV to this color-mixing model, saturation is the amount of white, value is the amount of black, and hue is the color that the black and white are added to.
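Python's standard colorsys module implements this kind of conversion between RGB and HSV; a small sketch using the pink example from above:

```python
import colorsys

# colorsys works on RGB values scaled to 0..1.
r, g, b = 1.0, 0.5, 0.5              # a 50%-saturated red, i.e. pink
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)                        # hue 0.0 (red), saturation 0.5, value 1.0
```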
There are several other color systems that are similar to HSV in that they create color by altering hue with two other values. YUV YUV is basically a linear transformation of RGB image data and is most widely used to encode color for use in television transmission. Note, however, that this transformation is almost always accompanied by a separate quantization operation, which introduces nonlinearities into the conversion.
Y specifies gray scale or luminance. The U and V components correspond to the chrominance color information. Black and white establish the extremes of the range, with black having minimum intensity, gray having intermediate intensity, and white having maximum intensity. One can say that the gamut of gray is just a specific slice of a color space, each of whose points contains an equal amount of the three primary colors, has no saturation, and varies only in intensity.
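A minimal sketch of the linear RGB-to-YUV transformation, using the commonly quoted BT.601 luma weights and analog chrominance scale factors (an assumption for illustration; exact scale factors vary between standards):

```python
def rgb_to_yuv(r, g, b):
    """Y is a weighted sum of R, G, B (luminance); U and V are
    scaled blue-minus-luma and red-minus-luma differences."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return (y, u, v)

# A gray pixel carries only luminance: its chrominance is zero.
print(rgb_to_yuv(128, 128, 128))      # luma 128, U and V approximately 0
```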
White, for convenience, is often treated in file format specifications as a primary color. Gray is usually treated the same as other composite colors. An 8-bit pixel value can represent 256 different composite colors or 256 different shades of gray. In 24-bit RGB color, (12, 12, 12), (128, 128, 128), and (200, 200, 200) are all shades of gray. GIF87a was the original Web graphic file format. The current version, GIF89a, supports 1-bit (jagged-edge) transparency, comments, and simple animation. GIF is rarely a good choice for non-Web use. In June 1987, ten different techniques for coding color and gray-scaled still images were presented.
An adaptive transformation coding technique based on the Discrete Cosine Transform (DCT) achieved the best subjective results. JPEG applies to color and gray-scaled still images. Video sequences can also be handled through fast coding and decoding of still images, a technique called Motion JPEG. TIFF has many technical advantages, such as…. For example, at one time the Macintosh used this format to store screenshot images.
Developed by the Aldus Corporation in the 1980s, it was later supported by Microsoft. The most important tag is a format signifier: what type of compression is used, and so on. PACS A Picture Archiving and Communication System (PACS) transmits electronic images and reports digitally; this eliminates the need to manually file, retrieve, or transport film jackets. A PACS consists of four major components: the imaging modalities (such as CT and MRI), a secured network for the transmission of patient information, workstations for interpreting and reviewing images, and archives for the storage and retrieval of images and reports.
Combined with available and emerging Web technology, PACS has the ability to deliver timely and efficient access to images, interpretations, and related data. PACS breaks down the physical and time barriers associated with traditional film-based image retrieval, distribution, and display. Digital copies are referred to as soft copies.
It also enables practitioners in different physical locations to access the same information simultaneously (teleradiology). Hypermedia is the use of text, data, graphics, audio, and video as elements of an extended hypertext system in which all elements are linked and the content is accessible via hyperlinks, whereas interactive multimedia allows the user to control, combine, and manipulate different types of media, such as text, sound, video, computer graphics, and animation. We also briefly discussed different kinds of media technology, such as audio technology, images and graphics, video technology, and computer-based animation.
For graphics and images we described a few of their characteristics, and for video technology we covered the representation of video signals.
1. Explain the different types of multimedia.
2. Differentiate between multimedia and hypermedia.
3. Discuss briefly the different types of media technologies.
4. Explain the different types of color models.
5. Explain different types of web graphics.
6. Discuss briefly the picture archiving and communication system.
Reference: Donald Hearn and M. Pauline Baker, Computer Graphics, 3rd edition, Pearson.
Hyper-spectral imaging, like other spectral imaging, collects and processes information from across the electromagnetic spectrum.
This means that the camera is able to scan the biochemical composition of crops, and deliver an overview of every constituent present. This is because, instead of simply recording an image of fields from above, the camera is capable of looking directly into the crops themselves. The type of image dictates which image format is best to use. Three variations of the GIF format are in use. The original specification, GIF87a, became a standard because of its many advantages over other formats. Creators of drawing programs quickly discovered how easy it was to write a program that decodes and displays GIF images.
GIF images are compressed to 20 to 25 percent of their original size with no loss in image quality using a compression algorithm called LZW. The next update to the format was the GIF89a specification. Unlike the original GIF specifications, which support only 256 colors, the GIF24 update supports 24-bit color, which enables you to use more than 16 million colors. One drawback: before a 24-bit color image can be displayed on an 8-bit screen, it must be dithered, which requires processing time and may also distort the image.
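The LZW algorithm mentioned above can be sketched in a few dozen lines. This is a minimal illustrative version operating on whole integer codes, not GIF's exact variable-width-code implementation:

```python
def lzw_compress(data):
    """Build a dictionary of byte sequences on the fly and emit
    one code per longest-known sequence."""
    dictionary = {bytes([i]): i for i in range(256)}
    dict_size = 256
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                     # keep extending the match
        else:
            out.append(dictionary[w])  # emit code for longest match
            dictionary[wc] = dict_size # learn the new sequence
            dict_size += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    """Rebuild the same dictionary while decoding."""
    dictionary = {i: bytes([i]) for i in range(256)}
    dict_size = 256
    w = dictionary[codes[0]]
    result = bytearray(w)
    for k in codes[1:]:
        if k in dictionary:
            entry = dictionary[k]
        elif k == dict_size:           # the classic KwKwK special case
            entry = w + w[:1]
        else:
            raise ValueError("bad LZW code")
        result += entry
        dictionary[dict_size] = w + entry[:1]
        dict_size += 1
        w = entry
    return bytes(result)

data = b"ABABABABABAB"
codes = lzw_compress(data)
print(len(codes), "<", len(data))      # repetitive input compresses
```

Like GIF itself, this scheme is lossless: decompressing the codes reproduces the input exactly.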
This standard was developed by the Joint Photographic Experts Group. As you might have guessed, it works well for natural image types (photographs). Natural images have smooth variations of color, which means the JPEG format also works well for images that contain gradients and varying tones and colors. Lossy compression is simply a form of encoding that discards (loses) some of its data.
The address of a pixel corresponds to its physical coordinates. (Figure: an image with a portion greatly enlarged, in which the individual pixels are rendered as small squares and can easily be seen.) Each pixel is a sample of an original image; more samples typically provide a more accurate representation of the original. The intensity of each pixel is variable. In color image systems, a color is typically represented by three or four component intensities, such as red, green, and blue, or cyan, magenta, yellow, and black, as discussed in the previous unit.
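The correspondence between a pixel's coordinates and its address can be sketched with a flat, row-major buffer, a common (though not universal) layout:

```python
WIDTH, HEIGHT = 4, 3
# A tiny image stored as a flat list of (R, G, B) triplets, row by row.
pixels = [(0, 0, 0)] * (WIDTH * HEIGHT)

def set_pixel(x, y, rgb):
    pixels[y * WIDTH + x] = rgb        # row-major addressing

def get_pixel(x, y):
    return pixels[y * WIDTH + x]

set_pixel(2, 1, (255, 0, 0))
print(get_pixel(2, 1))                 # (255, 0, 0)
```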
A pixel is generally thought of as the smallest single component of a digital image, though the definition is highly context-sensitive. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. A 1 bpp (bit per pixel) image uses 1 bit for each pixel, so each pixel can be either on or off. Graphics in most old computer-console, graphing-calculator, and mobile-phone video games are mostly pixel art.
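The relationship between resolution, bits per pixel, and raw (uncompressed) image size described above can be illustrated as:

```python
def raw_size_bytes(width, height, bpp):
    """Uncompressed size: one sample of `bpp` bits per pixel."""
    return width * height * bpp // 8

print(raw_size_bytes(640, 480, 1))     # 1 bpp black-and-white: 38400 bytes
print(raw_size_bytes(640, 480, 24))    # 24 bpp true color: 921600 bytes
```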
Image filters (such as blurring or alpha-blending) and tools with automatic anti-aliasing are not considered valid tools for pixel art, as such tools calculate new pixel values automatically, contrasting with the precise manual arrangement of pixels associated with pixel art. Line art is usually traced over scanned drawings and is often shared among pixel artists. Other techniques, some resembling painting, also exist. The limited palette often implemented in pixel art usually promotes dithering to achieve different shades and colors, but due to the nature of this form of art it is done completely by hand.
Hand-made anti-aliasing is also used. The JPEG format is avoided because its lossy compression algorithm is designed for smooth continuous-tone images and introduces visible artifacts in the presence of dithering. The Isometric kind is commonly seen in games to provide a three-dimensional view without using any real three-dimensional processing.
Non-isometric pixel art is any pixel art that does not fall into the isometric category, such as top, side, front, or bottom views; these are also called planometric views. Pixel art is normally scaled with nearest-neighbour ("pixel resize") interpolation. This avoids the blurring caused by other algorithms, such as bilinear and bicubic interpolation, which interpolate between adjacent pixels and work best on continuous tones but not on sharp edges or lines.
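Nearest-neighbour scaling, which keeps the hard edges that bilinear and bicubic interpolation would blur, can be sketched as simple pixel replication (a toy example using characters as "pixels"):

```python
def scale_nearest(src, src_w, src_h, factor):
    """Integer upscaling by replication: each destination pixel copies
    the nearest source pixel, so edges stay sharp."""
    dst = []
    for y in range(src_h * factor):
        for x in range(src_w * factor):
            dst.append(src[(y // factor) * src_w + (x // factor)])
    return dst

tiny = ['A', 'B',
        'C', 'D']                      # a 2x2 "image", row-major
big = scale_nearest(tiny, 2, 2, 2)     # each pixel becomes a 2x2 block
print(big)
```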
With the increasing use of 3D graphics in games, pixel art lost some of its use.
Pixel art has, however, found new use in advertising. One such company that uses pixel art to advertise is Bell. The group eBoy specializes in isometric pixel graphics for advertising. The limited number of colors and the limited resolution present a challenge when attempting to convey complicated concepts and ideas in an efficient way. On the Microsoft Windows desktop, icons are raster images of various sizes, the smaller of which are not necessarily scaled from the larger ones and could be considered pixel art. The images you see on a computer screen start out as binary data that must be translated into pixels; computers either have graphics capabilities already built in to do that, or you have to install a graphics card.
For computers that do not already have graphics capabilities, the graphics card is where that translation from binary code to image takes place. A graphics card receives information sent from the processor (CPU) by software applications. The processor is the "central processing unit" (CPU), or microprocessor. On most computers, the motherboard has sockets and slots where the processor and the system's main memory are installed.
A motherboard has power connectors which receive and distribute electric power from the computer's power supply usually just an electric cord.
Most motherboards also have connectors for input devices such as a mouse or keyboard. The graphics card uses the motherboard to receive electric power and to receive data from the computer's processor. The graphics card uses the processor to decide what to do with each pixel: what color it should be and where it should be placed in order to make an image on the screen. MEMORY: Whether data comes from permanent storage on a hard drive or from other input sources like keyboards, the data goes into random access memory (RAM), where the processor retrieves it.
RAM is temporary storage for data that allows the processor to access the data quickly; it would greatly slow down a computer if the processor had to access the hard drive for every piece of information it needed. The graphics card uses memory to hold information about each pixel (its color and location on the screen) and to temporarily store the final images.
MONITOR: The monitor is the piece of the computer you are looking at right now to see this article. A graphics card uses a monitor so that you can see the final result. A graphics card is a printed circuit board, similar to the motherboard, that is connected to the computer's motherboard.
The connection to the motherboard is how the card is powered and how it communicates with the processor. Some graphics cards require more power than a motherboard can provide, so they also connect directly to the computer's power supply. The graphics card's processor is called a graphics processing unit (GPU). Computer graphics remains one of the most exciting and rapidly growing areas of modern technology. Computer-graphics methods are routinely applied in the design of most products, in training simulators, in motion pictures, in data analysis, in scientific studies, in medical procedures, and in numerous other applications.
A great variety of techniques and hardware devices are now in use or under development for these diverse application areas. Graphics can be categorized into two types: raster graphics and vector graphics. Raster images are stored in image files with varying formats. Common forms of graphics include the following. Drawings generally involve making marks on a surface by applying pressure from a tool, or moving a tool across a surface.
Graphical drawing is an instrument-guided drawing. Line art is usually monochromatic, although lines may be of different colors. An illustration is a visual representation, such as a drawing, painting, photograph, or other work of art, that stresses subject more than form. The aim of illustration is to decorate a story, poem, or piece of textual information.
A graph is a type of information graphic that represents tabular numeric data. Charts are often used to make it easier to understand large quantities of data and the relationships between different parts of the data. A diagram is a simplified and structured visual representation of concepts, ideas, constructions, relations, statistical data, etc., used to visualize and clarify a topic. Advantages With update dynamics it is possible to change the shape, color, or other properties of the objects being viewed.
Disadvantages
1. Time consuming: decisions about layout, color, materials, etc. must be made in advance.
2. Technical in nature: the audience needs knowledge to interpret or understand it.
3. Costly: depending on the medium used (poster board, transfer letters, etc.).
Charts and graphs are very useful in decision making. Interactive graphics supported by animation software have proved their use in the production of animated movies and cartoon films. They allow users to create artistic pictures which express messages and attract attention; such pictures are very useful in advertisements. Models of physical systems, physiological systems, population trends, or equipment, such as color-coded diagrams, can help trainees to understand the operation of a system.
GIF images are compressed to 20 to 25 percent of their original size with no loss in image quality using a compression algorithm called LZW. We also discussed pixels, the representation of pixels in digital images, and pixels on output devices. Pixel art is a form of digital art, created through the use of raster graphics software, where images are edited at the pixel level.
This unit also described the working of multimedia chipsets in computer systems. Graphics are visual representations on some surface, such as a wall, canvas, screen, or paper, used to inform, illustrate, or entertain.
1. What is airborne imaging? Explain briefly.
2. What is a pixel? Explain briefly about pixel art.
3. What is graphics?
4. What are the advantages and disadvantages of graphics?
5. Discuss the applications of computer graphics.
Vector graphics are based on vectors (also called paths, or strokes) which lead through locations called control points. Each of these points has a definite position on the x and y axes of the work plane. Each point also holds a variety of data, including the location of the point in the work space and the direction of the vector (which is what defines the direction of the track). Each track can be assigned a color, a shape, a thickness, and a fill.
This does not affect the size of the files in a substantial way, because all the information resides in the structure: it describes how to draw the vector. (Figure: a very simple vector drawing.) In a vector drawing, you create control points; the lines are created by the software, joining up the control points the user has drawn. There are 4 control points in the drawing (3 are little white squares; the last one is dark to indicate that it is being worked on). It is possible to rescale a whole chunk of animation without the blockiness you would get from doing this with bitmaps.
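The lossless rescaling of vector data can be illustrated by scaling the control points directly; this is a toy sketch, not any real vector file format:

```python
# A drawing is just a list of control points; rescaling multiplies
# coordinates exactly, with no resampling and no blockiness.
points = [(10, 10), (40, 10), (40, 30), (10, 30)]

def rescale(pts, factor):
    return [(x * factor, y * factor) for (x, y) in pts]

print(rescale(points, 3))   # [(30, 30), (120, 30), (120, 90), (30, 90)]
```

Contrast this with a bitmap, where scaling up requires inventing new pixel values and scaling down discards them.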
Painting and drawing programs continue to evolve, and each type increasingly incorporates elements of the other: painting programs gain more drawing features, and drawing programs gain more painting features. Some software can do a good job of transforming a given bitmap into a vector graphic, though there is always some loss of detail involved.
Especially of interest to readers will be information about document handling and its standards, programming of multimedia applications, design of multimedia information at human-computer interfaces, multimedia security challenges such as encryption and watermarking, multimedia in education, and multimedia applications that assist in the preparation, processing, and application of multimedia content.
Together with more than 20 researchers, he is working towards his vision of "truly seamless multimedia communications". She is an expert in the area of multimedia systems and networks and focuses on quality of service management problems. Ralf Steinmetz worked for over nine years in industrial research and development of distributed multimedia systems and applications.
His thematic focus in research and teaching is on multimedia communications, with his vision of real "seamless multimedia communications." Over the last ten years she has been working on various research problems in the area of quality-of-service provisioning for real-time multimedia processing and communication systems and has published many papers in leading conferences and journals. Multimedia Applications, by Ralf Steinmetz and Klara Nahrstedt. Topics covered include: Database Systems; Java Media Framework; Digital Signatures; Steganographic Methods; Documents, Hypertext and Hypermedia.