Question
How do I implement any of these design patterns? See instructions and code base below.
Instructions
Assignment 5: Structural and Behavioural Design Patterns - Lab (4%)
This assignment relates to the following Course Learning Requirements:
- CLR 1: Implement an object-oriented program design incorporating the use of best-practice design patterns using the Java programming language.
Objective of this Assignment:
Demonstrate the skills required to:
- Apply through practical application the following design patterns/strategies: Adapter and Proxy.
- Apply through practical application the following design patterns/strategies: Observer.
Instructions
To prepare you for this assignment, read the module 9 and 10 content and follow the embedded learning activities.
Read the following Scenario
- The first task of this assignment is to read the Code & Code Scenario. It is a continuation of the previous assignment.
- Once you have familiarized yourself with the scenario provided above, review the code from the previous iteration (provided) and refactor it by applying design patterns to improve the overall design.
The two missing pieces are:
- An interface for accessing the external service APIs, with the Workers connected to the APIs through a Proxy and/or Adapter.
- A Notifier acting on a callback that triggers an event when the closed-captioning process is completed and the recording is updated.
PART I
Proxy Pattern
The current code includes two classes that represent the APIs for the Google and AWS services, but only a subset of the implemented methods is required by the application. Because we want to expose only the methods the application really needs:
- Implement a Proxy named GoogleSpeech2TextProxy for the Google API
- Implement a Proxy named AWSTranscribeProxy for the AWS API.
- Update the CCWorker class accordingly.
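A minimal sketch of the two proxies follows. The class names come from the assignment; the single exposed method name `transcribe` and the lazy instantiation of the real subject are assumptions. The mock API classes from the code base below are abridged inline here so the sketch is self-contained (in the assignment they live in `com.google.api` and `com.aws.api`):

```java
import java.util.ArrayList;
import java.util.List;

// Abridged stand-ins for the mock API classes from the code base.
class GoogleSpeech2TextAPI {
    private final List<String> alternatives = new ArrayList<>();
    private String fileName;
    String instantiateClient() { return "speechClient"; }
    String fileToMemory(String f) { this.fileName = f; return "audioBytes"; }
    String buildSyncRecognizeRequestConfig() { return "config"; }
    String buildSyncRecognizeRequestAudio() { return "audio"; }
    void performSpeechRecognition(String config, String audio) { alternatives.add("transcript"); }
    String getFirstTranscriptAlternative() { return alternatives.get(0) + " for " + fileName; }
}

class AWSTranscribeAPI {
    private final List<String> transcript = new ArrayList<>();
    private String fileName;
    String clientCreate() { return "client"; }
    String getStreamFromFile(String f) { this.fileName = f; return "streamFromFile"; }
    void startStreamTranscription(String client, String stream) { transcript.add("transcript for " + fileName); }
    String getResult() { return String.join("\t", transcript); }
    void clientClose(String client) { }
}

// Proxy: exposes only the one operation the workers need and hides the
// multi-step call sequence of the real subject.
class GoogleSpeech2TextProxy {
    private GoogleSpeech2TextAPI api; // real subject, created lazily

    String transcribe(String rawFile) {
        if (api == null) {
            api = new GoogleSpeech2TextAPI();
        }
        api.instantiateClient();
        api.fileToMemory(rawFile);
        String config = api.buildSyncRecognizeRequestConfig();
        String audio = api.buildSyncRecognizeRequestAudio();
        api.performSpeechRecognition(config, audio);
        return api.getFirstTranscriptAlternative();
    }
}

class AWSTranscribeProxy {
    private AWSTranscribeAPI api; // real subject, created lazily

    String transcribe(String rawFile) {
        if (api == null) {
            api = new AWSTranscribeAPI();
        }
        String client = api.clientCreate();
        String stream = api.getStreamFromFile(rawFile);
        api.startStreamTranscription(client, stream);
        String result = api.getResult();
        api.clientClose(client);
        return result;
    }
}
```

With proxies like these, each of CCWorker's trigger methods shrinks to the simulated delay plus a single `recording.setCcFileMock(proxy.transcribe(rawFile))` call.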
Adapter Pattern
We want to use the same interface (and therefore the same method) for accessing the two Proxies.
- Implement a CCGoogleAdapter and a CCAWSAdapter that share the same method (although with different logic) for triggering the closed-captioning process.
- Update the CCWorker class accordingly (keep the cc triggering using local methods).
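A sketch of a common target interface plus the two adapters follows. The interface name `ClosedCaptioningService` and the method name `generateClosedCaption` are assumptions (the assignment only requires that both adapters share one method), and the Part I proxies are stubbed inline so the sketch compiles on its own:

```java
// Target interface shared by both adapters (name is an assumption).
interface ClosedCaptioningService {
    String generateClosedCaption(String rawFile);
}

// Inline stubs for the Part I proxies, so the sketch is self-contained.
class GoogleSpeech2TextProxy {
    String transcribe(String rawFile) { return "google transcript for " + rawFile; }
}

class AWSTranscribeProxy {
    String transcribe(String rawFile) { return "aws transcript for " + rawFile; }
}

// Adapter: translates the common interface call into the
// Google-specific proxy call.
class CCGoogleAdapter implements ClosedCaptioningService {
    private final GoogleSpeech2TextProxy proxy = new GoogleSpeech2TextProxy();

    @Override
    public String generateClosedCaption(String rawFile) {
        // Google-specific pre/post-processing would go here.
        return proxy.transcribe(rawFile);
    }
}

// Adapter: translates the same call into the AWS-specific proxy call.
class CCAWSAdapter implements ClosedCaptioningService {
    private final AWSTranscribeProxy proxy = new AWSTranscribeProxy();

    @Override
    public String generateClosedCaption(String rawFile) {
        // AWS-specific pre/post-processing would go here.
        return proxy.transcribe(rawFile);
    }
}
```

CCWorker can then hold both adapters behind the shared interface and invoke them from its local trigger methods, as the assignment asks.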
PART II
Observer Pattern
We want to have notifications that can be sent to the Console to simulate an external Event Logging System that may be used for Analytics and Live Monitoring.
- Implement the Observer pattern with the Recording class as the observable subject and a new Notifier class as the observer.
- The event should be triggered when the Recording is updated with the transcript (the result of the closed-captioning process), and the generated message should include information from the recording itself.
- Update the Main class accordingly.
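A hand-rolled Observer sketch follows. The `RecordingObserver` interface name and the message format are assumptions (`java.util.Observable` is deprecated, so the subject manages its own observer list here), and `Recording` is abridged to just the fields the notification needs:

```java
import java.util.ArrayList;
import java.util.List;

// Observer contract (interface name is an assumption).
interface RecordingObserver {
    void update(String message);
}

// Concrete observer from the assignment: prints events to the console,
// simulating an external event-logging system.
class Notifier implements RecordingObserver {
    @Override
    public void update(String message) {
        System.out.println("[EVENT] " + message);
    }
}

// Observable subject, abridged from the Recording class in the code base.
class Recording {
    private final String fileName;
    private String ccFileMock;
    private final List<RecordingObserver> observers = new ArrayList<>();

    Recording(String fileName) {
        this.fileName = fileName;
    }

    void addObserver(RecordingObserver observer) {
        observers.add(observer);
    }

    // Updating the transcript is the event that triggers notification.
    void setCcFileMock(String ccFileMock) {
        this.ccFileMock = ccFileMock;
        notifyObservers("Recording " + fileName
                + " updated with transcript: " + ccFileMock);
    }

    private void notifyObservers(String message) {
        for (RecordingObserver observer : observers) {
            observer.update(message);
        }
    }
}
```

In Main, register the observer on each mock recording before enqueueing it, e.g. `recording.addObserver(new Notifier());`.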
-----------------
Code
-----------------
package com.algonquin.loggy;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
/**
* @author jesus
*
*/
public class CCSpooler {
private final ExecutorService executor;
public CCSpooler() {
// creating a pool of 5 threads
this.executor = Executors.newFixedThreadPool(5);
}
public void enqueue(Recording recording) {
Runnable worker = new CCWorker(recording);
// calling execute method of ExecutorService
executor.execute(worker);
}
public void shutdown() {
executor.shutdown();
try {
// Block until all queued workers finish instead of busy-waiting.
executor.awaitTermination(Long.MAX_VALUE, java.util.concurrent.TimeUnit.DAYS);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
System.out.println("Finished all threads");
}
}
package com.algonquin.loggy;
import com.aws.api.AWSTranscribeAPI;
import com.google.api.GoogleSpeech2TextAPI;
/**
* @author jesus
*
*/
public class CCWorker implements Runnable {
private final Recording recording;
// Constructor to assign a message when creating a new thread
public CCWorker(Recording recording) {
this.recording = recording;
}
@Override
public void run() {
System.out.println(
Thread.currentThread().getName() + " (Start closed captioning) recording = " + recording.getFileName());
// Trigger CC using the local methods.
triggerGoogleClosedCaptioning();
triggerAWSClosedCaptioning();
// Trigger CC using the adapter methods.
// TODO
System.out.println(Thread.currentThread().getName() + " (End closed captioning)");
}
private void triggerGoogleClosedCaptioning() {
String rawFile = recording.getMediaFileMock();
Long fileSize = recording.getFileSize();
GoogleSpeech2TextAPI api = new GoogleSpeech2TextAPI();
String ccFile = "";
System.out.println("Closed captioning " + rawFile + " will take " + fileSize + " milliseconds...");
try {
// Simulate the delay.
Thread.sleep(fileSize);
// MockUp transcript process.
String speechClient = api.instantiateClient();
String audioBytes = api.fileToMemory(rawFile);
String config = api.buildSyncRecognizeRequestConfig();
String audio = api.buildSyncRecognizeRequestAudio();
api.performSpeechRecognition(config, audio);
String transcript = api.getFirstTranscriptAlternative();
ccFile = transcript;
} catch (InterruptedException e) {
e.printStackTrace();
}
recording.setCcFileMock(ccFile);
System.out.println(ccFile + " processed using GoogleSpeech2TextAPI");
}
private void triggerAWSClosedCaptioning() {
String rawFile = recording.getMediaFileMock();
Long fileSize = recording.getFileSize();
AWSTranscribeAPI api = new AWSTranscribeAPI();
String ccFile = "";
System.out.println("Closed captioning " + rawFile + " will take " + fileSize + " milliseconds...");
try {
// Simulate the delay.
Thread.sleep(fileSize);
// MockUp transcript process.
String client = api.clientCreate();
String stream = api.getStreamFromFile(rawFile);
api.startStreamTranscription(client, stream);
String transcript = api.getResult();
api.clientClose(client);
ccFile = transcript;
} catch (InterruptedException e) {
e.printStackTrace();
}
recording.setCcFileMock(ccFile);
System.out.println(ccFile + " processed using AWSTranscribeAPI");
}
}
package com.algonquin.loggy;
import java.util.LinkedList;
import java.util.List;
import java.util.UUID;
/**
* @author jesus
*
*/
public class Main {
/**
* @param args
*/
public static void main(String[] args) {
int maxmockups = 1; // The number of mock-ups to be generated.
List<Recording> recordings = new LinkedList<>();
// Set the mock-up recordings.
for (int i = 0; i < maxmockups; i++) {
String fileName = "recording-" + String.valueOf(i) + ".mp4";
Long fileSize = (long) (Math.random() * (1024L - 1L));
recordings.add(new Recording(UUID.randomUUID(), fileName, fileSize));
}
// Enqueue recordings for closed captioning.
CCSpooler spooler = new CCSpooler();
recordings.forEach((recording) -> {
spooler.enqueue(recording);
});
spooler.shutdown();
}
}
package com.algonquin.loggy;
import java.util.UUID;
/**
* @author jesus
*
*/
public class Recording {
private UUID uuid;
private String fileName;
private Long fileSize;
private String mediaFileMock;
private String ccFileMock;
/**
* @return the fileName
*/
public String getFileName() {
return fileName;
}
/**
* @param fileName the fileName to set
*/
public void setFileName(String fileName) {
this.fileName = fileName;
}
/**
* @return the fileSize
*/
public Long getFileSize() {
return fileSize;
}
/**
* @param fileSize the fileSize to set
*/
public void setFileSize(Long fileSize) {
this.fileSize = fileSize;
}
public String getMediaFileMock() {
return mediaFileMock;
}
public void setMediaFileMock(String mediaFileMock) {
this.mediaFileMock = mediaFileMock;
}
public String getCcFileMock() {
return ccFileMock;
}
public void setCcFileMock(String ccFileMock) {
this.ccFileMock = ccFileMock;
}
/**
* @param uuid
* @param fileName
* @param fileSize
*/
public Recording(UUID uuid, String fileName, Long fileSize) {
this.uuid = uuid;
this.fileName = fileName;
this.fileSize = fileSize;
this.mediaFileMock = fileName;
}
}
package com.aws.api;
import java.util.ArrayList;
import java.util.List;
/* This is a very rough and ugly AWS Transcribe API pretender,
 * for academic purposes, unrelated to API programming or speech-to-text
 * recognition.
 *
 * But if you are curious about how the transcripts are actually done using the
 * AWS API, you can find real examples at:
 * https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/example_code/transcribe
 */
public class AWSTranscribeAPI {
List<String> transcript;
String fileName;
public AWSTranscribeAPI() {
transcript = new ArrayList<>();
}
public String getStreamFromFile(String audioFileName) {
this.fileName = audioFileName;
return "streamFromFile";
}
public void startStreamTranscription(String client, String stream) {
System.out.println(client + " is starting streaming " + stream);
transcript.add(" for " + this.fileName);
for (int i = 1; i <= 5; i++) {
transcript.add("line-" + i);
}
}
public String getResult() {
String transcriptString = "";
for (String s : transcript) {
transcriptString += s + "\t";
}
return transcriptString;
}
public String clientCreate() {
return "client";
}
public void clientClose(String client) {
System.out.println("Closing " + client);
}
}
package com.google.api;
import java.util.ArrayList;
import java.util.List;
/* This is a very rough and ugly Google Speech2Text API pretender,
 * for academic purposes, unrelated to API programming or speech-to-text
 * recognition.
 *
 * But if you are curious about how the transcripts are actually done using the
 * Google API, you can see a real example at:
 * https://cloud.google.com/speech-to-text/docs/libraries#client-libraries-resources-java
 */
public class GoogleSpeech2TextAPI {
List<String> transcriptAlternatives;
String fileName;
public GoogleSpeech2TextAPI() {
transcriptAlternatives = new ArrayList<>();
}
public String instantiateClient() {
return "speechClient";
}
public String fileToMemory(String fileName) {
this.fileName = fileName;
return "audioBytes";
}
public String buildSyncRecognizeRequestConfig() {
return "config";
}
public String buildSyncRecognizeRequestAudio() {
return "audio";
}
public void performSpeechRecognition(String config, String audio) {
// Mock-up results.
System.out.println("Performing Speech Recognition based on " + config + " for " + audio);
transcriptAlternatives.add("");
transcriptAlternatives.add("");
}
public String getFirstTranscriptAlternative() {
for (String result : transcriptAlternatives) {
// Returns the first element.
System.out.println("Returning " + result);
return result + " for " + this.fileName;
}
return null;
}
public List<String> getResultList() {
return transcriptAlternatives;
}
public String recognitionAudio() {
return null;
}
public void setRecognitionConfigParameters() {
System.out.println("Set parameters");
}
}
Step by Step Solution
There are 3 steps involved.
Step: 1
To implement the Proxy and Adapter design patterns as per the instructions provided in the assignment, you need to extend and refactor the existing code base. Proxy Pattern Implementation: 1. Create Proxy ...
Step: 2
Step: 3