
Question

SnapShop works with images using filters. The SnapShop code to load and display images is already working, as is one filter, which flips an image horizontally.

Each filter implements the Java interface Filter. This interface has a single method:

void filter(PixelImage theImage);
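
For reference, a minimal sketch of the interface declaration implied by this description; the starter code's actual file may also contain documentation comments:

// The Filter interface, as described above.
public interface Filter {
    void filter(PixelImage theImage);
}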

Within SnapShop, an image is handled using the class PixelImage. The PixelImage class supports manipulation of the image as a rectangular array of Pixel objects. PixelImage has the simple operations getHeight, getWidth, getData, and setData. These can be used by the filters, for example by FlipHorizontalFilter.

Each pixel is composed of three components: red, green, and blue. These are fields in the Pixel class. Acceptable values for these components range from 0 to 255, inclusive. (That is, the Pixel class represents 24-bit color.) The higher the value, the brighter that component is.

new Pixel(0, 0, 0)        // a black Pixel
new Pixel(255, 255, 255)  // a white Pixel
new Pixel(255, 0, 0)      // a red Pixel
new Pixel(0, 0, 255)      // a blue Pixel

New filters are added to SnapShop in the SnapShopConfiguration class by calling the addFilter method of SnapShop. This method takes two parameters: a Filter object that will manipulate the PixelImage, and a String to display for the filter.
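
For example, the existing flip-horizontal filter might be registered with a call along these lines; the variable name and the exact label text here are illustrative, not taken from the starter code:

// Register a filter together with the label shown in the user interface.
snapShop.addFilter(new FlipHorizontalFilter(), "Flip Horizontal");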

Objectives

  • Work with interfaces.
  • Manipulate data within an array.
  • Work within an existing framework.
  • Evaluate and document your work.

Instructions

Starting Point

Compile and run the application to see how it works. The application class is SnapShop. The SnapShop application handles common image file types, including .jpg, .gif, and .png.

Review the code for this application. Pay particular attention to the SnapShopConfiguration and FlipHorizontalFilter classes.

  • SnapShopConfiguration: Use this class to control the appearance of the user interface. You can add filters to the SnapShop application, as well as set a default file name.
  • FlipHorizontalFilter: This is an example of the implementation of a simple filter.

Simple Transformations

There are two simple transformations you shall add to SnapShop: flipping the image vertically and creating a gray-scale image.

Flip Vertical

Create a new class FlipVerticalFilter. This class shall implement the Filter interface. You can use FlipHorizontalFilter as a model. Make sure that the comments make sense.
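
A minimal sketch of such a filter, assuming getData returns the pixels as a Pixel[row][column] array and setData accepts an array of the same dimensions, might look like the following. Swapping rows top-to-bottom flips the image vertically.

// A sketch of FlipVerticalFilter; assumes a Pixel[row][column] layout.
public class FlipVerticalFilter implements Filter {
    public void filter(PixelImage theImage) {
        Pixel[][] data = theImage.getData();
        int height = theImage.getHeight();
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < theImage.getWidth(); col++) {
                // Swap this pixel with its mirror across the horizontal midline.
                Pixel temp = data[row][col];
                data[row][col] = data[height - 1 - row][col];
                data[height - 1 - row][col] = temp;
            }
        }
        theImage.setData(data);
    }
}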

Add the FlipVerticalFilter to SnapShop. You can do this by removing the comment marker on line 20 of SnapShopConfiguration and adding an appropriate argument to the method call.

You can test your implementation at this time.

Notes:

  • Assume that the image is rectangular; that is, all of the rows are the same length.
  • The flip horizontal and flip vertical transformations are their own inverses. That is, applying either of these filters twice will result in the original image.
  • You can reload the image in SnapShop by simply clicking the Load button.
  • For the purposes of testing, keep the files relatively small, say less than 50 KB.
  • It is a good idea to give complete file names. You can use a forward slash to separate folders, even on Windows.

Gray Scale

The second simple transform is to convert the picture to gray scale. That means that all three color components (red, green, and blue) shall have the same value after the filter is applied. Let's call this value the gray value.

The standard gray scale algorithm sets the gray value to the sum of 30% of the red value, 59% of the green value, and 11% of the blue value. We will call this Gray Scale 1.

A commonly used approximation sets the gray value to 11/32 of the red value, 16/32 of the green value, and 5/32 of the blue value. This calculation is used because it is easy to implement using integer arithmetic. We will call this Gray Scale 2.

gray = (11 * red + 16 * green + 5 * blue) / 32

Both of these transforms can be implemented by directly modifying the array of pixels.
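
A sketch of the Gray Scale 2 filter, under the same assumption about the array layout returned by getData, might look like the following; Gray Scale 1 differs only in how the gray value is computed. The direct access to red, green, and blue assumes those fields are publicly accessible, as the description above suggests.

// A sketch of a Gray Scale 2 filter using the integer approximation
// gray = (11 * red + 16 * green + 5 * blue) / 32.
public class GrayScale2Filter implements Filter {
    public void filter(PixelImage theImage) {
        Pixel[][] data = theImage.getData();
        for (int row = 0; row < theImage.getHeight(); row++) {
            for (int col = 0; col < theImage.getWidth(); col++) {
                Pixel p = data[row][col];
                int gray = (11 * p.red + 16 * p.green + 5 * p.blue) / 32;
                // Setting all three components to the gray value produces gray scale.
                data[row][col] = new Pixel(gray, gray, gray);
            }
        }
        theImage.setData(data);
    }
}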

Rotational Transformations

These are conceptually quite straightforward, as shown in the test cases below.

Rotate Clockwise and Rotate Counter-Clockwise

There is a major wrinkle with these two transforms. The implementation of PixelImage requires that, when the setData method is called, the dimensions of the array of Pixels match the original dimensions of the image. Therefore, these transformations can only be applied to square images. If the Pixel array does not have the same dimensions, the call to setData fails with an IllegalArgumentException.

The implementation of these filters needs to deal with the requirement for a square image. If the image is square, the filter should apply the rotational transform. If the image is not square, there is no need to do any processing; the filter can raise IllegalStateException. The SnapShop code is set up to handle this exception by displaying a message dialog box containing the exception message.
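
A sketch of the clockwise rotation, again assuming a Pixel[row][column] layout from getData, might look like this; the counter-clockwise filter is analogous, with the index mapping reversed.

// A sketch of RotateClockwiseFilter. Non-square images are rejected with
// an IllegalStateException, which SnapShop reports in a dialog box.
public class RotateClockwiseFilter implements Filter {
    public void filter(PixelImage theImage) {
        int height = theImage.getHeight();
        int width = theImage.getWidth();
        if (height != width) {
            throw new IllegalStateException("Rotation requires a square image.");
        }
        Pixel[][] oldData = theImage.getData();
        Pixel[][] newData = new Pixel[height][width];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                // After a 90-degree clockwise turn, the pixel at (row, col)
                // lands at row col, column (height - 1 - row).
                newData[col][height - 1 - row] = oldData[row][col];
            }
        }
        theImage.setData(newData);
    }
}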

Note: One of the extra-credit options is to update the PixelImage code to allow the rotation of non-square images. In this case, the rotational filters should not throw an IllegalStateException.

3x3 Transformations

The next set of transformations is a bit more complicated: Gaussian, Laplacian, Unsharp Masking, and Edgy. In these transformations, the value of a pixel is calculated using the original pixel and its immediate neighbors. That is, a 3x3 grid of pixels is used to calculate the value for a pixel. The red, green, and blue values are handled separately; that is, the new red value is calculated from the old red values; the new green value, from the old green values; and the new blue value, from the old blue values.

The four transforms compute the new pixel values as a weighted average of the old values. The only difference between these transforms is the weights that are used. When you implement these transforms, consider refactoring rather than having completely separate calculations for each of the transforms. There are a couple of ways to do this; one possible shared helper is sketched after the notes below. Please feel free to discuss the options in the forum.

Here are the weights to use for the 3x3 transformations, with some additional notes.

Gaussian

1 2 1
2 4 2
1 2 1

The weights total 16. So, after computing the weighted sum, the total must be divided by 16 to bring the value back into the range 0-255. The effect of this filter is to blur the image.

Laplacian

-1 -1 -1
-1 8 -1
-1 -1 -1

In this case, the sum of the weights is 0. There is no need to scale the sum. However, the values must remain within the range 0-255. So, if the sum is less than 0, use 0 rather than the calculated value. Similarly, if the sum is greater than 255, use 255 rather than the calculated value. This transform detects edges and highlights them.

Unsharp Masking

-1 -2 -1
-2 28 -2
-1 -2 -1

This transform is created by multiplying the center pixel's value by 32 and subtracting the Gaussian transform. The sum of the weights is 16, so the weighted sum must be divided by 16 to bring it into range. As with the Laplacian transform, the scaled values must then be adjusted to ensure that they are within the range 0-255. This transform can lessen the blurring in an image.

Edgy

-1 -1 -1
-1 9 -1
-1 -1 -1

This is the Laplacian transform added to the original pixel. The sum of the weights is 1, so there is no need for scaling. However, the values must still be checked against the range 0-255 and corrected if needed.

Notes:

  • These transformations are a bit more complicated because the value of each pixel depends on the values of its neighbors. That means that you cannot replace the original value of a pixel before that value has been used to calculate the new values of all of its neighbors. The simplest solution is to use two separate pixel arrays.
  • As a simplification for the 3x3 transformations, only update the interior pixels. You will need to provide values for the pixels around the edges; copy those values from the original image.
  • An example of applying these 3x3 transformations is available under Lesson 2 HW example.
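
As mentioned above, one way to share the work among the four 3x3 filters is a single helper that applies a weight matrix, divides by a scale factor, and clamps the result to 0-255. The sketch below assumes a Pixel[row][column] layout from getData, and also assumes that getData returns a fresh array on each call (so the second call serves as a copy); if it does not, copy the array manually. Following the notes above, it reads from the original array, writes into the copy, and leaves the border pixels unchanged.

// A sketch of a shared 3x3 transform. Each concrete filter (Gaussian,
// Laplacian, unsharp masking, edgy) supplies its own weights and divisor;
// use a divisor of 1 when the weights sum to 0 or 1.
public abstract class Transform3x3Filter implements Filter {
    protected abstract int[][] weights();   // the 3x3 weight matrix
    protected abstract int divisor();       // what to divide the weighted sum by

    public void filter(PixelImage theImage) {
        Pixel[][] oldData = theImage.getData();
        // Assumed to be an independent copy; border pixels keep their original values.
        Pixel[][] newData = theImage.getData();
        int[][] w = weights();
        int d = divisor();
        for (int row = 1; row < theImage.getHeight() - 1; row++) {
            for (int col = 1; col < theImage.getWidth() - 1; col++) {
                int red = 0, green = 0, blue = 0;
                // Weighted sum over the 3x3 neighborhood, one color component at a time.
                for (int i = -1; i <= 1; i++) {
                    for (int j = -1; j <= 1; j++) {
                        Pixel p = oldData[row + i][col + j];
                        red += w[i + 1][j + 1] * p.red;
                        green += w[i + 1][j + 1] * p.green;
                        blue += w[i + 1][j + 1] * p.blue;
                    }
                }
                newData[row][col] = new Pixel(clamp(red / d), clamp(green / d), clamp(blue / d));
            }
        }
        theImage.setData(newData);
    }

    // Keep a color component within the 0-255 range.
    private static int clamp(int value) {
        return Math.max(0, Math.min(255, value));
    }
}

A Gaussian filter, for example, could then extend this class and return the weights 1 2 1 / 2 4 2 / 1 2 1 with a divisor of 16; the other three filters differ only in their weights and divisors.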

Test Cases (Sample Output)

Each test case pairs a sample input image with the image it becomes after the filter is applied. (Sample before/after images omitted.)

Minimal Version

  • Flip Horizontal (given)
  • Flip Vertical
  • Gray Scale 1 (using 30%, 59%, 11%)
  • Gray Scale 2 (using 11/32, 16/32, 5/32)
  • Rotate Clockwise
  • Rotate Counter-Clockwise

Standard Version, additional filters

  • Gaussian
  • Laplacian
  • Unsharp Masking
  • Edgy

Challenge Version, additional filters

  • Rotate Clockwise
  • Rotate Counter-Clockwise

Report

Write a report in which you describe:

  • how you went about starting this project
  • what works and what doesn't
  • the surprises or problems you encountered while implementing this application
  • the most important thing(s) you learned from this portion of the assignment
  • what you would do differently next time

I expect a clear report that shows some thought. It will be easiest if you take notes about the process as you work on the assignment, rather than waiting until the end to start the written report.

Minimal Version

For the minimal version of the assignment, implement the following filters:

  • flip vertical
  • gray scale 1 & 2
  • rotate clockwise (square images only)
  • rotate counter-clockwise (square images only)

For the minimal assignment, all five of these filters must run and produce correct output. In the minimal and standard versions, the two rotational filters shall raise an IllegalStateException when processing non-square images. However, this exception is handled, and an appropriate error dialog should be displayed, as described above.

Standard Version

For the standard version of the assignment, implement the following filters:

  • flip vertical
  • gray scale 1 & 2
  • Gaussian
  • Laplacian
  • unsharp masking
  • edgy
  • rotate clockwise (square images only)
  • rotate counter-clockwise (square images only)

For the standard assignment, all nine of these filters must run. The output of the filters shall be correct, as defined above. The 3x3 filters (Gaussian, Laplacian, unsharp masking, and edgy) should produce reasonable output. Correct output means that each color component value for each pixel is within 5 of the expected value. Reasonable output means that processing of the pixels is attempted and the resulting transformation of the image shows a tendency toward the expected values.

So, the standard assignment is the minimal assignment, plus: correct output from the 3x3 filters (Gaussian, Laplacian, unsharp masking, and edgy).
