
Question

### Question 1

Given the following table:

```sql
CREATE TABLE `IMPUTATION` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `Employee_code` bigint(20) unsigned DEFAULT NULL,
  `Activity` bigint(20) unsigned DEFAULT NULL,
  `Hours` decimal(50,14) DEFAULT NULL,
  `Fortnight` varchar(32) DEFAULT NULL,  -- type assumed; the column is implied by `Emp_act_qc_index` below
  `Shared_id` varchar(128) NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `Code_empleado_index` (`Employee_code`),
  KEY `Activity_index` (`Activity`),
  KEY `Shared_id_index` (`Shared_id`),
  KEY `Emp_act_index` (`Employee_code`, `Activity`),
  KEY `Emp_act_qc_index` (`Employee_code`, `Activity`, `Fortnight`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```

Consider a hypothetical application that processes employee data, taking the following into account:

- The app integrates with an external application that manages the allocation of those employees' hours.
- The data for the employees' imputations (time entries) is received through an API.
- The IMPUTATION table contains the hours that employees log against the company's different activities/projects/tasks.
- Massive data imports arrive through the API, both inserts and updates, and the two systems do not share the same IDs for the imputations. When the integration was designed, it was therefore decided to use a generated code of up to 128 characters that is unique and corresponds to the employee, date, and fiscal year. **This field is Shared_id in the table above.**
- When data is received, the Shared_id is looked up: records that already exist are updated, while the rest are inserted.
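The upsert flow described above can be sketched as follows. This is a minimal illustration using Python's `sqlite3` with invented sample data; MySQL would use `INSERT ... ON DUPLICATE KEY UPDATE` instead of SQLite's `ON CONFLICT` clause, and either form requires a UNIQUE constraint on `Shared_id` (which the table above does not have, a point the design question below returns to):

```python
import sqlite3

# Simplified stand-in for the IMPUTATION table; note the UNIQUE
# constraint on Shared_id, which makes the upsert possible.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IMPUTATION (
        ID            INTEGER PRIMARY KEY AUTOINCREMENT,
        Employee_code INTEGER,
        Activity      INTEGER,
        Hours         REAL,
        Shared_id     TEXT NOT NULL UNIQUE
    )
""")

def upsert(rows):
    # Insert each imputation; if the Shared_id already exists,
    # update the existing row instead of creating a duplicate.
    conn.executemany("""
        INSERT INTO IMPUTATION (Employee_code, Activity, Hours, Shared_id)
        VALUES (?, ?, ?, ?)
        ON CONFLICT(Shared_id) DO UPDATE SET
            Employee_code = excluded.Employee_code,
            Activity      = excluded.Activity,
            Hours         = excluded.Hours
    """, rows)
    conn.commit()

# Shared_id values below are invented sample codes.
upsert([(1, 10, 8.0, "E1-2024-01-A"), (2, 10, 7.5, "E2-2024-01-A")])
upsert([(1, 10, 6.0, "E1-2024-01-A")])  # same Shared_id: update, not a new row

count = conn.execute("SELECT COUNT(*) FROM IMPUTATION").fetchone()[0]
hours = conn.execute(
    "SELECT Hours FROM IMPUTATION WHERE Shared_id = ?", ("E1-2024-01-A",)
).fetchone()[0]
print(count, hours)  # 2 6.0 — still two rows, E1's hours updated
```

The bug in the question is exactly what happens when this lookup-then-write logic is implemented without a unique constraint backing it: a race or a faulty search produces duplicates that the database happily accepts.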

With this in mind, one day it is discovered that there is a bug in the code in charge of inserting/updating data. The bug has produced duplicate records with the same Shared_id in the IMPUTATION table, and in some cases the number of hours does not even match across the duplicates. The bug is fixed at the code level, but now the IMPUTATION table has to be cleaned. To do so, keep in mind that the duplicates must be deleted leaving only the last record, which is assumed to contain the valid data.

Design a query that deletes only the erroneous data. Also, do you think the table has a design error? If so, how would you improve it?
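One way to approach the cleanup can be sketched as follows (demonstrated with Python's `sqlite3` on invented sample data): keep only the newest row, i.e. the highest `ID`, for each `Shared_id`, and delete the rest. Note that MySQL rejects a DELETE whose subquery reads from the same table (error 1093), so there the inner SELECT must be wrapped in a derived table or rewritten as a self-join, as shown after the block:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IMPUTATION (
        ID            INTEGER PRIMARY KEY AUTOINCREMENT,
        Employee_code INTEGER,
        Activity      INTEGER,
        Hours         REAL,
        Shared_id     TEXT NOT NULL
    )
""")
# Invented sample data reproducing the bug: two rows share a Shared_id
# and disagree on Hours; the later row (higher ID) holds the valid data.
conn.executemany(
    "INSERT INTO IMPUTATION (Employee_code, Activity, Hours, Shared_id) "
    "VALUES (?, ?, ?, ?)",
    [
        (1, 10, 8.0, "E1-2024-01-A"),  # stale duplicate -> delete
        (1, 10, 6.0, "E1-2024-01-A"),  # latest -> keep
        (2, 10, 7.5, "E2-2024-01-A"),  # no duplicate -> keep
    ],
)

# Delete every row whose ID is not the maximum ID for its Shared_id.
conn.execute("""
    DELETE FROM IMPUTATION
    WHERE ID NOT IN (SELECT MAX(ID) FROM IMPUTATION GROUP BY Shared_id)
""")
conn.commit()

rows = conn.execute(
    "SELECT Shared_id, Hours FROM IMPUTATION ORDER BY ID"
).fetchall()
print(rows)  # [('E1-2024-01-A', 6.0), ('E2-2024-01-A', 7.5)]
```

In MySQL the same cleanup can be written as a multi-table delete over a derived table, e.g. `DELETE i FROM IMPUTATION i JOIN (SELECT Shared_id, MAX(ID) AS keep_id FROM IMPUTATION GROUP BY Shared_id) k ON i.Shared_id = k.Shared_id AND i.ID < k.keep_id;`. As for the design error: `Shared_id` is described as unique but is only covered by a plain KEY. Declaring it UNIQUE would have made the duplicate inserts fail instead of silently corrupting the table, and would also let the import be written as a single `INSERT ... ON DUPLICATE KEY UPDATE`. Arguably `decimal(50,14)` is also far more precision than hours worked could ever need.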

