
Question


Simple Transformations

There are two kinds of transformations that you are required to implement. The simple transformations can be implemented by replacing each Pixel in the existing image with the updated one. The more complex 3x3 transformations require creating a new array of Pixels with the transformed image, then updating the image instance variable to refer to the new array once it is completely initialized.

The first three transformations you should implement flip the image horizontally and vertically, and transform the image into a photographic negative of itself (that is, you should create FlipHorizontalFilter, FlipVerticalFilter, and NegativeFilter classes). We have implemented FlipHorizontalFilter for you.

The first two require a simple rearrangement of the pixels that reverses the order of rows or columns in the image. The negate transformation is done by replacing each Pixel in the image with a new Pixel whose rgb values are calculated by subtracting the original rgb values from 255. These subtractions are done individually for each of the red, green, and blue colors.

These transformations can be performed by modifying the image array of Pixels directly. You should do these first to get a better idea of how the image is represented and what happens when you modify the Pixels. You should make every effort to get this far before the end of week 9. That will ensure that you've made good progress on this assignment, or at least know what you need to clear up in discussions during lecture.
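As a sketch of the negate arithmetic described above, here is a minimal, self-contained example that negates a single color channel held as a plain int array. The class and method names are my own, not part of the assignment; the real NegativeFilter would call setRed, setGreen, and setBlue on each Pixel instead.

```java
// Hypothetical helper (not in the assignment's starter code): negates one
// color channel stored as a plain int array, to show the arithmetic.
public class NegateDemo {
    static int[][] negateChannel(int[][] channel) {
        int[][] out = new int[channel.length][channel[0].length];
        for (int row = 0; row < channel.length; row++) {
            for (int col = 0; col < channel[0].length; col++) {
                // subtract the original value from 255
                out[row][col] = 255 - channel[row][col];
            }
        }
        return out;
    }
}
```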

3x3 Transformations

Once you've got the simple transformations working, you should implement this next set, which includes Gaussian blur, Laplacian, Unsharp Masking, and Edgy. All of these transformations are based on the following idea: each pixel in the transformed image is calculated from the values of the original pixel and its immediate neighbors, i.e., the 3x3 array of pixels centered on the old pixel whose new value we are trying to calculate. The new rgb values can be obtained by calculating a weighted average; the median, minimum, or maximum; or something else. As with the negate transformation, the calculations are carried out independently for each color, i.e., the new red value for a pixel is obtained from the old red values, and similarly for green and blue.

The four transformations you should implement all compute the new pixel values as a weighted average of the old ones. The only difference between them is the actual weights that are used. You should be able to add a single method inside class PixelImage to compute a new image using weighted averages, and call it from the methods for the specific transformations with appropriate weights as parameters. You should not need to repeat the code for calculating weighted averages four times, once in each transformation. The method you add to PixelImage to do the actual calculations can, of course, call additional new methods if it makes sense to break the calculation into smaller pieces.
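A minimal sketch of such a shared weighted-average method, operating on a single color channel as an int array for brevity (all names here are my own; the real version would be a method of PixelImage working with the Pixel class). It also illustrates the scaling, the 0-255 clamping, and the border handling discussed in the notes below:

```java
// Hypothetical sketch: apply a 3x3 weight table to one color channel.
// Interior pixels get the weighted sum divided by 'scale' and clamped to
// 0..255; border pixels are copied unchanged; results go into a NEW array
// so old values stay available while their neighbors are computed.
public class ConvolveDemo {
    static int[][] transform(int[][] ch, int[][] weights, int scale) {
        int h = ch.length, w = ch[0].length;
        int[][] out = new int[h][];
        for (int row = 0; row < h; row++) {
            out[row] = ch[row].clone(); // keeps the border pixels initialized
        }
        for (int row = 1; row < h - 1; row++) {
            for (int col = 1; col < w - 1; col++) {
                int sum = 0;
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        sum += weights[dr + 1][dc + 1] * ch[row + dr][col + dc];
                    }
                }
                sum /= scale; // e.g. 16 for Gaussian, 1 when no scaling is needed
                out[row][col] = Math.max(0, Math.min(255, sum)); // clamp to 0..255
            }
        }
        return out;
    }
}
```

Each of the four filters would then call this one method with its own weight table and scale factor, rather than repeating the loop four times.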

Here are the weights for the 3x3 transformations you should implement.

Gaussian

 1  2  1
 2  4  2
 1  2  1

After computing the weighted sum, the result must be divided by 16 to scale the numbers back down to the range 0 to 255. The effect is to blur the image.

Laplacian

-1 -1 -1
-1  8 -1
-1 -1 -1

The neighboring pixel values are subtracted from 8 times the center one, so no scaling is needed. However, you do need to check that the weighted average is between 0 and 255. If it is less than 0, replace the calculated value with 0 (i.e., the new value is the maximum of 0 and the calculated value). If it is greater than 255, then replace the calculated value with 255. This transformation detects and highlights edges.

Unsharp masking

-1 -2 -1
-2 28 -2
-1 -2 -1

These weights come from multiplying the center pixel by 32 and subtracting the Gaussian weights (32 - 4 = 28 in the center), i.e., doubling the original image and subtracting its Gaussian blur. The result must be divided by 16 to scale it back down to the range 0 to 255. As with the Laplacian transformation, check for weighted averages less than 0 or greater than 255 (and fix them the same way as in the Laplacian case).

Edgy

-1 -1 -1
-1  9 -1
-1 -1 -1

This adds the Laplacian weighted average to the original pixel, which sharpens the edges in the image. It does not need scaling, but you need to watch for weighted averages less than 0 or greater than 255.
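The four weight tables above translate directly into Java 2-D arrays. The class and constant names below are my own, not part of the assignment:

```java
// The four 3x3 weight tables from the assignment, as Java constants.
public class FilterWeights {
    static final int[][] GAUSSIAN  = { { 1, 2, 1 }, { 2, 4, 2 }, { 1, 2, 1 } };          // divide by 16
    static final int[][] LAPLACIAN = { { -1, -1, -1 }, { -1, 8, -1 }, { -1, -1, -1 } };  // no scaling
    static final int[][] UNSHARP   = { { -1, -2, -1 }, { -2, 28, -2 }, { -1, -2, -1 } }; // divide by 16
    static final int[][] EDGY      = { { -1, -1, -1 }, { -1, 9, -1 }, { -1, -1, -1 } };  // no scaling
}
```

A quick sanity check on these values: the Gaussian and unsharp tables sum to 16 (their divisor), so a flat region of the image passes through unchanged; the Laplacian sums to 0 (flat regions go to black, which is why it highlights edges); and Edgy sums to 1 (flat regions are unchanged, edges are sharpened).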

Notes:

The complication with these transformations is that the new value of each pixel depends on the neighboring ones, as well as itself. That means we cannot replace the original pixels with new values before the old values have been used to compute the new values of their neighbors. The simplest way to handle this is to create a new 2D Pixel array the same size as the old image, compute Pixels for the new image and store them in the new array, then change the image instance variable to refer to the new array once it is completed.

You should assume the image has at least three rows and columns, and you do not need to worry about updating the first and last rows and columns. In other words, only update the interior pixels that have neighbors on all four sides. However, every position in the array of Pixels must refer to a Pixel object; you can't just leave a position in the array uninitialized.

Debugging hint: From past experience, we've noticed that bugs in the implementation of these transformations tend to produce more spectacular visible effects with the Laplacian weights. You might start with this set of weights when testing your code for the 3x3 transformations.

Be sure that your monitor is set to thousands or millions of colors, which is normally the case on modern PCs. If the monitor resolution is set so high that the color depth drops to 256 colors, colors will be rendered only approximately and it will be hard to see the effects of most of these transformations.

I have to implement the Gaussian blur, Laplacian, and unsharp masking filters.

Also, the Negative filter code.

public class SnapShopConfiguration {

    public static void configure(SnapShop theShop) {
        // set default directory
        theShop.setDefaultDirectory("./Images/");

        theShop.addFilter(new FlipVerticalFilter(), "Flip Vertical");
        theShop.addFilter(new FlipHorizontalFilter(), "Flip Horizontal");
        theShop.addFilter(new DemosaicFilter(), "Demosaic");
        theShop.addFilter(new DarkenFilter(), "Darken");
        theShop.addFilter(new ShiftRightFilter(), "Shift Right");
        theShop.addFilter(new EdgeFilter(), "Edge");
        theShop.addFilter(new CrazyFilter(), "Crazy");
    }

    // creates a new SnapShop object
    public static void main(String[] args) {
        SnapShop theShop = new SnapShop();
    }
}

------------------------------------------------------------------------

import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;
import javax.swing.*;
import java.io.*;
import javax.swing.JFileChooser;

public class SnapShop extends JFrame {
    FileLoader fl;
    FilterButtons fb;
    ImagePanel ip;
    RenderingDialog rd;
    BufferedImage originalImage;
    Filter digicam;
    PixelImage pixelImage; // pixel image corresponding to this image

    public SnapShop() {
        super("SnapShop");

        this.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }
        });

        ip = new ImagePanel(this);
        this.getContentPane().add(ip, BorderLayout.CENTER);

        fl = new FileLoader(this);
        this.getContentPane().add(fl, BorderLayout.NORTH);

        fb = new FilterButtons(this);
        this.getContentPane().add(fb, BorderLayout.WEST);

        rd = new RenderingDialog(this);

        SnapShopConfiguration.configure(this);

        // add the digital camera filter as part of the standard SnapShop
        digicam = new DigitalCameraFilter();
        addFilter(digicam, "Digital Camera Filter");

        this.pack();
        this.setVisible(true);
    }

    // FileLoader
    private class FileLoader extends JPanel implements ActionListener {
        private String filePath;
        private ImagePanel ip;
        private SnapShop s;

        // construct a new FileLoader object
        public FileLoader(SnapShop s) {
            super(new FlowLayout());

            filePath = null;
            this.s = s;
            this.ip = s.getImagePanel();

            JButton loadButton = new JButton(" Load New File ");
            loadButton.addActionListener(this);
            add(loadButton);
        }

        public void actionPerformed(ActionEvent e) {
            JFileChooser chooser = new JFileChooser(filePath);
            int resultOfShow = chooser.showDialog(null, "Choose a .jpg image file");
            try {
                if (resultOfShow == JFileChooser.APPROVE_OPTION) {
                    String fileName = chooser.getSelectedFile().getAbsolutePath();
                    ip.loadImage(fileName);
                } else {
                    // user must have canceled
                    throw new IOException("");
                }
            } catch (Exception ex) {
                JOptionPane.showMessageDialog(s, "Could not load a file", "Error",
                        JOptionPane.ERROR_MESSAGE);
            }
        }

        public void setDefaultDirectory(String filePath) {
            this.filePath = filePath;
        }
    }

    private class FilterButtons extends JPanel {
        private SnapShop s;
        private ImagePanel ip;

        public FilterButtons(SnapShop s) {
            setLayout(new BoxLayout(this, BoxLayout.Y_AXIS));
            this.s = s;
            this.ip = s.getImagePanel();
        }

        public void addFilter(Filter f, String description) {
            JButton filterButton = new JButton(description);
            filterButton.addActionListener(new FilterButtonListener(this, f));
            add(filterButton);
            s.pack();
        }

        public void applyFilter(Filter f) {
            try {
                ip.applyFilter(f);
            } catch (Exception e) {
                e.printStackTrace(System.out);
            }
        }

        private class FilterButtonListener implements ActionListener {
            private FilterButtons fb;
            private Filter f;

            public FilterButtonListener(FilterButtons fb, Filter f) {
                this.fb = fb;
                this.f = f;
            }

            public void actionPerformed(ActionEvent e) {
                fb.applyFilter(f);
            }
        }
    }

    // Class representing the ImagePanel
    private class ImagePanel extends JPanel {
        // instance variables
        private BufferedImage bi;
        private SnapShop s;
        private int margin;

        public ImagePanel(SnapShop s) {
            margin = 10;
            bi = null;
            this.s = s;
        }

        public void loadImage(String filename) {
            Image img = Toolkit.getDefaultToolkit().getImage(filename);
            try {
                MediaTracker tracker = new MediaTracker(this);
                tracker.addImage(img, 0);
                tracker.waitForID(0);
            } catch (Exception e) {}
            int width = img.getWidth(this);
            int height = img.getHeight(this);
            bi = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            Graphics2D biContext = bi.createGraphics();
            biContext.drawImage(img, 0, 0, null);
            setPreferredSize(new Dimension(2 * bi.getWidth() + margin, bi.getHeight()));
            revalidate();
            s.pack();
            s.repaint();

            // set the original image to this image
            // and apply the digital camera filter
            originalImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = originalImage.createGraphics();
            g.drawImage(img, 0, 0, null);
            pixelImage = new PixelImage(bi);
            fb.applyFilter(digicam);
        }

        public void paint(Graphics g) {
            super.paint(g);
            if (bi != null) {
                g.drawImage(bi, 0, 0, this);
                g.drawImage(originalImage, bi.getWidth() + margin, 0, this);
            }
        }

        public void applyFilter(Filter f) {
            if (bi == null) {
                return;
            }
            s.showWaitDialog();
            f.filter(pixelImage.getData());
            pixelImage.setData();
            s.hideWaitDialog();
            bi = pixelImage.getImage();
            repaint();
        }
    }

    private class RenderingDialog extends JFrame {

        public RenderingDialog(JFrame parent) {
            super("Please Wait");
            Point p = parent.getLocation();
            setLocation((int) p.getX() + 100, (int) p.getY() + 100);
            this.getContentPane().add(new JLabel("Applying filter, please wait..."),
                    BorderLayout.CENTER);
        }
    }

    public void addFilter(Filter f, String description) {
        fb.addFilter(f, description);
    }

    protected void showWaitDialog() {
        rd.pack();
        rd.setVisible(true);
    }

    protected void hideWaitDialog() {
        rd.setVisible(false);
    }

    protected ImagePanel getImagePanel() {
        return ip;
    }

    public void setDefaultDirectory(String filepath) {
        fl.setDefaultDirectory(filepath);
    }
}

-----------------------------------------------------------------------------------------

import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class PixelImage {
    private BufferedImage myImage;
    private int width;
    private int height;
    private Pixel pixels[][];

    public PixelImage(BufferedImage bi) {
        // initialise instance variables
        myImage = bi;
        width = bi.getWidth();
        height = bi.getHeight();
        pixels = new Pixel[height][width];
        initializePixels();
    }

    public int getWidth() {
        return this.width;
    }

    public int getHeight() {
        return this.height;
    }

    public BufferedImage getImage() {
        return this.myImage;
    }

    private void initializePixels() {
        Raster r = this.myImage.getRaster();
        int[] samples = new int[3];

        // NOTE: the loop body below was cut off in the original paste (the
        // "<" characters were lost); this is a plausible reconstruction.
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                r.getPixel(col, row, samples);
                pixels[row][col] = new Pixel(samples[0], samples[1], samples[2]);
            }
        }
    }

    public Pixel[][] getData() {
        return pixels;
    }

    public void setData() throws IllegalArgumentException {
        int[] pixelValues = new int[3]; // a temporary array to hold r,g,b values
        WritableRaster wr = this.myImage.getRaster();

        if (pixels.length != wr.getHeight()) {
            throw new IllegalArgumentException("Array size does not match");
        } else if (pixels[0].length != wr.getWidth()) {
            throw new IllegalArgumentException("Array size does not match");
        }
        // NOTE: reconstructed loop (also truncated in the original paste)
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                pixelValues[0] = pixels[row][col].getRed();
                pixelValues[1] = pixels[row][col].getGreen();
                pixelValues[2] = pixels[row][col].getBlue();
                wr.setPixel(col, row, pixelValues);
            }
        }
    }
}

-----------------------------------------------------------------------------------------

// NOTE: the class declaration was missing from the original paste; the fields
// and methods below clearly belong to the Pixel class used throughout SnapShop.
public class Pixel {

    private int red;
    private int green;
    private int blue;
    private int digCamColor;

    public static final int ALL = 0;
    public static final int RED = 1;
    public static final int GREEN = 2;
    public static final int BLUE = 3;

    public Pixel(int red, int green, int blue) {
        this.red = red;
        this.green = green;
        this.blue = blue;
        this.digCamColor = Pixel.ALL; // set to all colors
    }

    public int getRed() {
        return red;
    }

    public int getGreen() {
        return green;
    }

    public int getBlue() {
        return blue;
    }

    public int getDigCamColor() {
        return digCamColor;
    }

    public int getComponentById(int id) {
        switch (id) {
            case RED: return red;
            case BLUE: return blue;
            case GREEN: return green;
        }
        return -1; // error value
    }

    public void setRed(int intensity) {
        red = intensity;
    }

    public void setGreen(int intensity) {
        green = intensity;
    }

    public void setBlue(int intensity) {
        blue = intensity;
    }

    public void setDigCamColor(int color) {
        digCamColor = color;
    }

    public void setComponentById(int id, int intensity) {
        switch (id) {
            case RED: red = intensity; break;
            case BLUE: blue = intensity; break;
            case GREEN: green = intensity; break;
        }
    }

    public void setAllColors(int rIntensity, int gIntensity, int bIntensity) {
        red = rIntensity;
        green = gIntensity;
        blue = bIntensity;
    }

    public void keepSingleColor(int color) {
        if (color == RED) {
            green = 0;
            blue = 0;
            digCamColor = Pixel.RED;
        }
        if (color == GREEN) {
            red = 0;
            blue = 0;
            digCamColor = Pixel.GREEN;
        }
        if (color == BLUE) {
            red = 0;
            green = 0;
            digCamColor = Pixel.BLUE;
        }
    }

    public String toString() {
        return "Pixel(red=" + red + ", green=" + green + ", blue=" + blue + ")";
    }
}

-----------------------------------------------------------------------------------------

public interface Filter {

    void filter(Pixel[][] theImage);
}

[Screenshot: the CSC 142 SnapShop window, with an "Enter file name:" field and filter buttons: Flip Horizontal, Flip Vertical, Negative, Gaussian blur, Laplacian, Unsharp masking, Edgy, Emboss, Black & white]
