
I need the answer for the Lab Writeup please...

Lab 1-18 - Programming Tools Part V: Compiler options and gprof

Material Covered

man g++

man gprof

man time

Part I: Compiler Options

Compiler options allow you to affect the behavior of the compiler and what types of libraries it includes in the executable code. We have already covered several compiler options in previous lectures or in CMPS 221:

-o : Name your executable file

-c : Compile only, create a .o file

-g : Add debugging information into executable for use with gdb

There are many other compiler options that may be useful in your upper-division courses. For example, you can link an additional library with the option -lname, where name is taken from a filename of the form libname.so in /lib or /usr/lib (or any other library path defined by the system administrator). For example, the option -lm links the library found in the file named libm.so on the library path. If the library file is not in the standard library path, the directory containing it can be added to the search path with the option -L, such as -L/usr/X11/lib -lgraphics.
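As a brief sketch (the file name here is hypothetical, not part of the lab), a program that calls sqrt from the math library can be built by naming that library on the command line. Note that g++ often links the math library automatically, so -lm is shown purely to illustrate the option:

    // root.cpp -- hypothetical example that uses the math library
    #include <cmath>
    #include <iostream>

    int main() {
        std::cout << "sqrt(2) = " << std::sqrt(2.0) << std::endl;
        return 0;
    }

Compile and link it with: g++ -o root root.cpp -lm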

You can also tell the compiler to optimize your code as it compiles. This can result in faster executables, but it should be used with caution, as some optimizations can negatively affect the behavior of your code. Several compiler optimization levels are provided: -O, -O1, -O2, and -O3. The number specifies the level of optimization, with higher levels (such as -O3) performing more optimization than lower levels (such as -O1). The options -O and -O1 are aliases for one another; both perform first-level optimization.
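As a quick illustration (using the demo.cpp file that appears later in this handout as a hypothetical example), the same source can be compiled at several optimization levels and the resulting executables compared for speed with the time utility described in Part II:

    g++ -O1 -o demo_o1 demo.cpp
    g++ -O2 -o demo_o2 demo.cpp
    g++ -O3 -o demo_o3 demo.cpp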

You can also use compiler options to specify a preprocessor macro on the command line. Say your code has sections within a preprocessor directive #ifdef DEBUG_ME. You could add #define DEBUG_ME to the top of the code to enable these debugging sections, or you could use the command line option to generate an executable with these sections. The option is -D, such as -DDEBUG_ME or -DMAX_SIZE=50.
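As a short sketch (the file below is hypothetical, not part of the lab), code guarded by #ifdef DEBUG_ME is only compiled into the executable when the macro is defined, whether in the source or on the command line:

    // demo_macro.cpp -- hypothetical example of code guarded by a preprocessor macro
    #include <iostream>

    #ifndef MAX_SIZE
    #define MAX_SIZE 10          // default used when -DMAX_SIZE=... is not given
    #endif

    int main() {
    #ifdef DEBUG_ME
        std::cout << "debug build, MAX_SIZE = " << MAX_SIZE << std::endl;
    #endif
        std::cout << "array capacity is " << MAX_SIZE << std::endl;
        return 0;
    }

Compiling with g++ -DDEBUG_ME -DMAX_SIZE=50 -o demo_macro demo_macro.cpp includes the debugging output and overrides the default size; compiling without those options omits the debug section and uses the default of 10.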

You can also control the type of warnings the compiler generates when compiling your code. Table 10-2 lists several common types of warnings. Warnings are enabled with the option -W, such as -Wall to enable all common warnings or -Wreturn-type to warn you when you do not have a return statement in a function that returns a value. You can also make the compiler stop on certain warnings by giving the option -pedantic-errors, which turns the diagnostics required by strict ISO C++ into errors; otherwise the compiler just prints the warning and continues compiling.
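For instance, a function that is declared to return a value but can fall off the end without a return statement will trigger the return-type warning (the file below is hypothetical):

    // warn_demo.cpp -- hypothetical example that triggers -Wreturn-type
    int half_if_even(int x) {
        if (x % 2 == 0)
            return x / 2;
        // no return on this path: -Wreturn-type (included in -Wall) warns here
    }

    int main() {
        return 0;
    }

Compiling with g++ -Wall -o warn_demo warn_demo.cpp prints the warning but still produces an executable; only options that promote warnings to errors will stop the compilation.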

You can combine as many of these options as you wish on one command line. For example, to compile with debugging, first-level optimization, pedantic errors, all warnings enabled, and with libraries and macros defined, give the following command:

g++ -g -Wall -pedantic-errors -L/usr/X11/lib -lgraphics -DDEBUG_ME -O1 -o demo demo.cpp

This is why people commonly define macros such as CFLAGS and DFLAGS in their makefiles: all the desired compiler options can be listed once at the top of the makefile rather than repeated on each individual compilation line.
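A minimal sketch of such a makefile (the variable values and file names are illustrative, not required by this lab) might look like:

    # Compiler options collected once at the top of the makefile
    CXX      = g++
    CXXFLAGS = -g -Wall -pedantic-errors -O1 -DDEBUG_ME
    LDFLAGS  = -L/usr/X11/lib
    LDLIBS   = -lgraphics

    demo: demo.cpp
            $(CXX) $(CXXFLAGS) $(LDFLAGS) -o demo demo.cpp $(LDLIBS)

(Remember that the command line under the target must begin with a tab character.)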

There are other compiler options which may be useful in certain situations, such as telling the compiler to produce an executable for a specific type of hardware architecture (such as producing an executable only for Pentium processors). Look at the man page on g++ or gcc for these compiler options.

Part II: gprof and time

Often you want to know how long your executable takes to run, or which functions account for most of that time. The time and gprof utilities are helpful for this sort of investigation.

The time utility records how much time and how many resources were used when running your executable. There are two versions of the time utility: the older version simply reports the real time, user time, and system time used by your program, while the newer version also accepts options that tell it to report CPU utilization, I/O accesses, and memory used by the program. Sleipnir has the latter utility, as do most Linux systems. The syntax for time is:

time [timeOptions] executable [exeOptions]

such as time ./hw3. See the man page for a full list of output options that can be given as a timeOption.
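For example, timing the hypothetical demo program from Part I might look like the following (the numbers are illustrative only, and the exact layout depends on which version of time you have):

    time ./demo

    real    0m2.37s
    user    0m2.30s
    sys     0m0.04s

Here the real time is elapsed wall-clock time, the user time is CPU time spent in your own code, and the system time is CPU time spent in the kernel on your program's behalf.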

The gprof utility allows you to profile your executable, generating statistics such as the percentage of execution time spent in each function. This is useful for determining which functions take the bulk of the execution time, particularly when you are trying to optimize the code to run faster. To enable profiling with gprof, you must use the compiler option -pg when creating the executable. Then you must run the executable so that statistical information can be gathered; this information is stored in a file called gmon.out. You can then invoke the gprof utility with the following command:

gprof executableName gmon.out > outputFile
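For example, profiling the hypothetical demo program from Part I would look something like this (the file names are illustrative):

    g++ -pg -o demo demo.cpp          # compile and link with profiling enabled
    ./demo                            # run normally; writes gmon.out to the current directory
    gprof demo gmon.out > demo.prof   # write the profiling report to demo.prof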

It is strongly recommended you redirect the output of gprof into an output file as gprof can produce quite a bit of information. The output will tell you how much time is spent in each function. There are two main portions to the output file, the flat profile and the call graph.

A flat profile appears similar to the following:

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self               self     total
 time   seconds   seconds      calls  s/call   s/call  name
78.14     34.91    34.91  398453580    0.00     0.00   disable_edge
14.36     41.33     6.42   18175662    0.00     0.00   disable_node
 5.64     43.85     2.52     100223    0.00     0.00   reset_graph
 1.41     44.48     0.63     100000    0.00     0.00   fitness
 0.20     44.57     0.09       5102    0.00     0.00   sort
 0.18     44.65     0.08     911442    0.00     0.00   compare_chromosome
 0.02     44.66     0.01      15525    0.00     0.00   create_node
 0.02     44.67     0.01       4634    0.00     0.00   create_edge
 0.02     44.68     0.01        200    0.00     0.00   population_save_chromosomes
 0.00     44.68     0.00    2786934    0.00     0.00   compare_node

Notice that it is sorted by the percentage of time spent in each function. We can see that in this code, the majority of the time is spent in the two functions called disable_edge and disable_node.

The call graph resembles the following:

Call graph (explanation follows)

granularity: each sample hit covers 4 byte(s) for 0.02% of 44.68 seconds

index  % time    self  children    called      name
[1]     100.0    0.00    44.68                 main [1]
                 0.00    44.66       1/1           population_lifetime [2]
                 0.00     0.02       1/1           init_population [13]
-----------------------------------------------
                 0.00    44.66       1/1           main [1]
[2]     100.0    0.00    44.66       1         population_lifetime [2]
                 0.00    44.43     199/199         create_next_generation [4]
                 0.00     0.22       1/200         population_fitness [3]
                 0.00     0.00      11/11          print_chromosome_graph [17]
                 0.00     0.00       1/200         population_save_chromosomes [11]
                 0.00     0.00       4/5102        sort [10]
                 0.00     0.00       1/100223      reset_graph [9]
                 0.00     0.00      11/11          pretty_print_chromosome [36]
                 0.00     0.00      10/100013      empty_chromosome [21]
                 0.00     0.00       1/12          pretty_print_graph [35]
-----------------------------------------------
                 0.00     0.22       1/200         population_lifetime [2]
                 0.00    44.25     199/200         create_next_generation [4]
[3]      99.5    0.00    44.48     200         population_fitness [3]
                 0.63    41.33  100000/100000      fitness [5]
                 2.52     0.00  100200/100223      reset_graph [9]
-----------------------------------------------

The graph is divided into entries, each separated by dashes (-----) and numbered in the first column. In the final column, you will notice the names of functions with a number in square brackets after them, such as population_lifetime [2] or population_fitness [3]. The number in square brackets indicates the entry that further traces the call stack for that function. So in entry [1], we see that main is called, and that main calls population_lifetime (detailed in entry [2]) and init_population (detailed in entry [13], which I left off due to space considerations). We can then look at entry [2] and see that population_lifetime calls a sequence of functions, each of which has its own entry later in the call graph output (again, trimmed here for space). For example, population_lifetime calls population_fitness, which is entry [3]. population_fitness in turn calls fitness (entry [5]) and reset_graph (entry [9]) before returning to population_lifetime.

The call graph output can be very verbose, but it gives you detailed information about the runtime stack (which functions call which functions) and how long each of the called functions took. Combined with the information in the flat profile, this lets us narrow in on the part of the code that takes the most time and would benefit most from optimization.

Lab Writeup

Answer the following questions:

1. What is the compiler option to enable level 2 optimization?

2. What is the compiler option to define a macro called CAPACITY with a value of 100?

3. Give the compiler line which will create an executable called 'lab1-18' from a source code file called 'lab1-18.cpp' using return-type warnings and pedantic errors.

4. Look at the man page for g++ and pick one architecture-related compiler option not listed in this lab. Name the option and describe what the man page says it does.

5. What is the purpose of the time utility?

6. How does the time utility differ from the gprof utility?

7. Look at the man page for time. What format token is used to tell time to output the average total memory use of the executable?

8. What compiler option must you use to enable profiling with the gprof utility?
