Object space methods: In this approach, the various parts of the objects are compared with one another; after the comparison, each surface is classified as visible, invisible, or partially visible. These methods generally decide the visible surfaces. In a wireframe model they are used to determine the visible lines, so the algorithms are line-based instead of surface-based. The method proceeds by determining the parts of an object whose view is obstructed by other objects and drawing those parts in the background color, so that they do not appear in the final image.
Image space methods: Here visibility is decided at the positions of the individual pixels; these methods locate visible surfaces rather than visible lines. Each pixel position is tested for visibility: if a point is visible, the pixel is turned on, otherwise it is left off. For each pixel, the object closest to the viewer that is pierced by the projector through that pixel is determined, and the pixel is drawn in the appropriate color.
These methods are also called visible surface determination methods. Implementing them on a computer requires a great deal of processing time and processing power.
The image space method requires more computation. Each object is defined clearly, and the visibility of each of its surfaces is determined.
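As a concrete illustration of the image space idea, here is a minimal depth-buffer-style sketch in C. It assumes a fixed frame-buffer size and that every surface point has already been projected to a pixel (x, y) with a depth z; the names `WIDTH`, `HEIGHT`, `depth_buf`, `color_buf`, and `plot_if_closer` are illustrative only, not part of any particular library.

```c
#include <float.h>

#define WIDTH  640
#define HEIGHT 480

/* Image-space sketch: for every pixel, keep the depth of the closest
 * surface seen so far and its color. */
static float    depth_buf[HEIGHT][WIDTH];  /* distance of closest surface so far */
static unsigned color_buf[HEIGHT][WIDTH];  /* color of that surface              */

void clear_buffers(void)
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            depth_buf[y][x] = FLT_MAX;  /* "nothing visible yet" */
            color_buf[y][x] = 0;        /* background color      */
        }
}

/* Called for every surface point that projects onto pixel (x, y).
 * The pixel keeps the color of the surface closest to the viewer,
 * i.e. of the object pierced first by the projector through the pixel. */
void plot_if_closer(int x, int y, float z, unsigned color)
{
    if (z < depth_buf[y][x]) {      /* closer than what is stored? */
        depth_buf[y][x] = z;
        color_buf[y][x] = color;
    }
}
```

This is essentially the depth-buffer (z-buffer) approach, one common image-space method; per-pixel storage and per-point depth tests are what make such methods resolution-dependent and computation-heavy.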
Object Space | Image Space |
---|---|
1. It is an object-based method. It concentrates on the geometrical relationships among the objects in the scene. | 1. It is a pixel-based method. It is concerned with the final image, i.e., what is visible within each raster pixel. |
2. Here the visibility of lines and surfaces is determined. | 2. Here the visibility at each pixel position (point visibility) is determined. |
3. It is performed at the precision with which each object is defined; the display resolution is not involved. | 3. It is performed at the resolution of the display device. |
4. Calculations are not based on the resolution of the display, so a change of object can be accommodated easily. | 4. Calculations are resolution-based, so a change is difficult to accommodate. |
5. These were developed for vector graphics systems. | 5. These were developed for raster devices. |
6. Object-based algorithms operate on continuous object data. | 6. Image-based algorithms operate on discrete (pixel) data. |
7. Vector displays used for the object method have a large address space. | 7. Raster systems used for image space methods have a limited address space. |
8. Object precision is suited to applications where accuracy is required. | 8. These are suited to applications where display speed matters more than precision. |
9. The result can be re-displayed or enlarged without losing accuracy. | 9. The image must be recomputed if it is enlarged, otherwise accuracy is lost. |
10. If the number of objects in the scene increases, the computation time also increases. | 10. Here the complexity increases with the complexity of the visible parts of the image. |
In both methods, sorting is used to carry out depth comparisons: individual lines, surfaces, or objects are ordered according to their distances from the view plane.
Considerations for selecting or designing hidden surface algorithms: The following considerations are taken into account:
Sorting: All surfaces are sorted into two classes, visible and invisible, and pixels are colored accordingly. Several sorting algorithms are available for this purpose.
Different sorting algorithms are used by different hidden surface algorithms. Objects are sorted by their x, y, and z coordinates; mostly the z coordinate is used. The efficiency of the sorting algorithm affects the efficiency of the hidden surface removal algorithm. For complex scenes with hundreds of polygons, more elaborate sorts are used, e.g., quick sort, tree sort, or radix sort.
For simple scenes, selection sort, insertion sort, or bubble sort is sufficient.
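As a rough sketch of the depth sort described above, the following C fragment orders polygons back to front on their average z using the standard library quick sort (`qsort`). The `Polygon` structure and its `average_z` field are assumptions made only for this example, and z is taken to increase with distance from the view plane.

```c
#include <stdlib.h>

/* Hypothetical polygon record: only the data needed for a depth sort. */
typedef struct {
    int   id;         /* which polygon this is         */
    float average_z;  /* average depth of its vertices */
} Polygon;

/* Compare by depth so that the farthest polygon comes first
 * (back-to-front order, as used by list-priority methods). */
static int by_depth(const void *a, const void *b)
{
    float za = ((const Polygon *)a)->average_z;
    float zb = ((const Polygon *)b)->average_z;
    if (za > zb) return -1;   /* farther from the view plane sorts first */
    if (za < zb) return  1;
    return 0;
}

void sort_back_to_front(Polygon *polys, size_t n)
{
    qsort(polys, n, sizeof polys[0], by_depth);  /* library quick sort */
}
```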
Coherence: It is used to take advantage of the regularity that exists in a scene, i.e., of how much remains constant across it. When we move from one polygon of an object to another polygon of the same object, the color and shading often remain unchanged.
1. Edge coherence: The visibility of an edge changes only where it crosses another edge or penetrates a visible face.
2. Object coherence: Each object is considered separately from the others. In object coherence, the comparison is done between whole objects instead of between edges or vertices; if object A is entirely farther away than object B, there is no need to compare their edges and faces individually.
3. Face coherence: Faces or polygons are generally small compared with the size of the image, so properties computed for one part of a face can usually be applied to the adjacent parts.
4. Area coherence: A group of adjacent pixels is often covered by the same visible face.
5. Depth coherence: The locations of different polygons are usually well separated in depth. Once the depth of a surface at one point is calculated, the depth of the remaining points on the surface can often be determined by a simple difference equation (see the sketch after this list).
6. Scan line coherence: The image is produced scan line by scan line, and the visible spans and edge intercepts usually change very little from one scan line to the next, so the intercepts found for the first line can be reused when processing the second.
7. Frame coherence: It is used for animated sequences, when there is little change in the image from one frame to the next.
8. Implied edge coherence: If one face penetrates another, the line of intersection can be determined from two points of intersection.
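The difference-equation idea behind depth coherence (item 5 above) can be sketched as follows. For a planar face Ax + By + Cz + D = 0, stepping one pixel along a scan line changes the depth by the constant -A/C, so only one addition per pixel is needed instead of a full plane-equation evaluation. The function below is a hypothetical illustration; the plane coefficients, starting depth, and span limits are assumed to be known for the face being scanned.

```c
/* Walk one visible span of a planar face along a scan line, filling in
 * the depth at each pixel from the depth at the previous pixel:
 *     z(x+1, y) = z(x, y) - A / C
 * depths[] is assumed to be large enough to be indexed by pixel x. */
void scan_span_depths(float A, float C, float z_start,
                      int x_start, int x_end, float *depths)
{
    float dz = -A / C;      /* constant depth change per pixel step      */
    float z  = z_start;     /* depth at the first pixel of the span      */

    for (int x = x_start; x <= x_end; x++) {
        depths[x] = z;      /* record depth for this pixel               */
        z += dz;            /* difference equation gives the next depth  */
    }
}
```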