Question
"""Task 1A: Unique Advertised Prefixes Over Time Using the data from cache files, measure the number of unique advertised prefixes over time. Each file is an annual snapshot. Calculate the number of unique prefixes within each snapshot by completing the function unique_prefixes_by_snapshot()."""
# Task 1A: Unique Advertised Prefixes Over Time
def unique_prefixes_by_snapshot(cache_files):
    """
    Retrieve the number of unique IP prefixes from each of the input BGP data files.

    Args:
        cache_files: A chronologically sorted list of absolute (also called
            "fully qualified") path names

    Returns:
        A list containing the number of unique IP prefixes for each input file.
        For example: [2, 5]
    """
    return []
I tried the following code:
import pybgpstream

def unique_prefixes_by_snapshot(cache_files):
    files = sorted(cache_files)
    myprefixes = []
    for myfile in files:
        stream = pybgpstream.BGPStream(data_interface="singlefile")
        stream.set_data_interface_option("singlefile", "rib-file", myfile)
        prefixes = []
        for element in stream:
            prefixes.append(element.fields['prefix'])
        prefixes = list(set(prefixes))
        myprefixes.append(len(prefixes))
    return myprefixes
This code gives me the result [328304, 352530, 400840, 456929, 502238, 551296, 604490, 665094], but the correct result should be [325909, 349070, 394951, 445511, 486335, 530663, 576998, 629248].
import pybgpstream

# Georgia Institute of Technology — CS6250 Computer Networks, BGP Measurements Project

# Task 1B: Unique Autonomous Systems Over Time
"""Using the data from the cache files, measure the number of unique Autonomous
Systems over time. Each file is an annual snapshot. Calculate the number of
unique ASes within each snapshot by completing the function
unique_ases_by_snapshot(). Make sure that your function returns the data
structure exactly as specified in bgpm.py.

Note: Consider all paths in each snapshot. Here, we consider all ASes that
appear in the paths (not only the origin AS). You may encounter corner cases of
paths with the following form: "25152 2914 18687 {7829,14265}". In this case,
consider the ASes in the brackets as a single AS. So, in this example, you will
count 4 distinct ASes.
"""
def unique_ases_by_snapshot(cache_files):
    """
    Retrieve the number of unique ASes from each of the input BGP data files.

    Args:
        cache_files: A chronologically sorted list of absolute (also called
            "fully qualified") path names

    Returns:
        A list containing the number of unique ASes for each input file.
        For example: [2, 5]
    """
    return []

# Task 1C: Top-10 Origin AS by Prefix Growth
"""Using the data from the cache files, calculate the percentage growth in
advertised prefixes for each AS over the entire timespan represented by the
snapshots by completing the function top_10_ases_by_prefix_growth(). Make sure
that your function returns the data structure exactly as specified in bgpm.py.

Consider each origin AS separately and measure the growth of the total unique
prefixes advertised by that AS over time. To compute this, for each origin AS:
1. Identify the first and the last snapshot where the origin AS appeared in
   the dataset.
2. Calculate the percentage increase of the advertised prefixes, using the
   first and the last snapshots.
3. Report the top 10 origin ASes sorted smallest to largest according to this
   metric.

Corner case: When calculating the prefixes originating from an origin AS, you
may encounter paths of the following form: "25152 2914 18687 {7829,14265}".
This is a corner case, and it should affect only a small number of prefixes.
In this case, consider the entire set of ASes "{7829,14265}" as the origin AS.
"""
def top_10_ases_by_prefix_growth(cache_files):
    """
    Compute the top 10 origin ASes ordered by percentage increase (smallest to
    largest) of advertised prefixes.

    Args:
        cache_files: A chronologically sorted list of absolute (also called
            "fully qualified") path names

    Returns:
        A list of the top 10 origin ASes ordered by percentage increase
        (smallest to largest) of advertised prefixes. AS numbers are
        represented as strings.
        For example: ["777", "1", "6"] corresponds to AS "777" as having the
        smallest percentage increase (of the top ten) and AS "6" having the
        highest percentage increase (of the top ten).
    """
    return []

# Task 2: Routing Table Growth: AS-Path Length Evolution Over Time
def shortest_path_by_origin_by_snapshot(cache_files):
    """
    Compute the shortest AS path length for every origin AS from the input BGP
    data files.

    Retrieves the shortest AS path length for every origin AS for every input
    file. Your code should return a dictionary where every key is a string
    representing an AS name and every value is a list of the shortest path
    lengths for that AS.

    Note: If a given AS is not present in an input file, the corresponding
    entry for that AS and file should be zero (0). Every list value in the
    dictionary should have the same length.

    Args:
        cache_files: A chronologically sorted list of absolute (also called
            "fully qualified") path names

    Returns:
        A dictionary where every key is a string representing an AS name and
        every value is a list, containing one entry per file, of the shortest
        path lengths for that AS. AS numbers are represented as strings.
        For example: {"455": [4, 2, 3], "533": [4, 1, 2]} corresponds to the
        AS "455" with the shortest path lengths 4, 2 and 3 and the AS "533"
        with the shortest path lengths 4, 1 and 2.
    """
    return {}

# Task 3: Announcement-Withdrawal Event Durations
def aw_event_durations(cache_files):
    """
    Identify Announcement and Withdrawal events and compute the duration of
    all explicit AW events in the input BGP data.

    Args:
        cache_files: A chronologically sorted list of absolute (also called
            "fully qualified") path names

    Returns:
        A dictionary where each key is a string representing the IPv4 address
        of a peer (peerIP) and each value is a dictionary with keys that are
        strings representing a prefix and values that are the list of explicit
        AW event durations (in seconds) for that peerIP and prefix pair.
        For example: {"127.0.0.1": {"12.13.14.0/24": [4.0, 1.0, 3.0]}}
        corresponds to the peerIP "127.0.0.1", the prefix "12.13.14.0/24" and
        event durations of 4.0, 1.0 and 3.0.
    """
    return {}

# Task 4: RTBH Event Durations
def rtbh_event_durations(cache_files):
    """
    Identify blackholing events and compute the duration of all RTBH events
    from the input BGP data.

    Identify events where the IPv4 prefixes are tagged with at least one
    Remote Triggered Blackholing (RTBH) community.

    Args:
        cache_files: A chronologically sorted list of absolute (also called
            "fully qualified") path names

    Returns:
        A dictionary where each key is a string representing the IPv4 address
        of a peer (peerIP) and each value is a dictionary with keys that are
        strings representing a prefix and values that are the list of explicit
        RTBH event durations (in seconds) for that peerIP and prefix pair.
        For example: {"127.0.0.1": {"12.13.14.0/24": [4.0, 1.0, 3.0]}}
        corresponds to the peerIP "127.0.0.1", the prefix "12.13.14.0/24" and
        event durations of 4.0, 1.0 and 3.0.
    """
    return {}

if __name__ == '__main__':
    # do nothing
    pass
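Since Tasks 1B and 1C both hinge on the "{7829,14265}" corner case, it is worth noting that plain whitespace splitting already handles it: an AS set contains no spaces, so str.split() keeps it as a single token. Below is a hedged sketch under that observation; the function names (path_hops, unique_as_count, origin_as) are illustrative, not part of the assignment's required API, and paths are assumed to be strings shaped like the as-path field that pybgpstream elements expose.

```python
def path_hops(as_path):
    """Split an AS path string into hops. A whitespace split keeps an AS set
    such as "{7829,14265}" as one token, which is exactly how the assignment
    says it should be counted."""
    return as_path.split()

def unique_as_count(as_paths):
    """Task 1B-style count: number of distinct ASes across all given paths.
    Using a set also collapses duplicate hops caused by AS-path prepending."""
    seen = set()
    for path in as_paths:
        seen.update(path_hops(path))
    return len(seen)

def origin_as(as_path):
    """Task 1C-style origin: the last hop of the path. An AS set like
    "{7829,14265}" in that position is treated as the origin AS as a whole."""
    hops = path_hops(as_path)
    return hops[-1] if hops else None
```

For the example path from the handout, path_hops yields four tokens, unique_as_count reports 4 distinct ASes, and origin_as returns the whole set "{7829,14265}".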
Step by Step Solution
Let's first address the discrepancy in the result you observed for the unique advertised prefixes over time. Without knowing the exact content of your data files, the most likely cause is that your count includes IPv6 prefixes as well as IPv4: the gaps between your numbers and the expected ones (about 2,400 in the first snapshot, growing to about 36,000 in the last) closely track the growth of the global IPv6 routing table over those years, while the expected values match IPv4-only table sizes. Filtering out IPv6 prefixes before counting should close the gap. One other thing to note: cache_files is documented as already chronologically sorted, so re-sorting it with sorted() is unnecessary, and can even reorder the results if the file names do not happen to sort chronologically.
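A minimal sketch of that fix, keeping the structure of your attempt. The is_ipv4_prefix helper is an illustrative name, not part of the assignment's API; it assumes the snapshots contain only IPv4 and IPv6 prefixes, which can be told apart because every IPv6 address contains a ':'.

```python
def is_ipv4_prefix(prefix):
    """IPv4 prefixes like '12.13.14.0/24' never contain ':'; IPv6 ones do."""
    return ":" not in prefix

def unique_prefixes_by_snapshot(cache_files):
    # Imported lazily so the helper above can be exercised without pybgpstream.
    import pybgpstream
    counts = []
    for cache_file in cache_files:  # already chronologically sorted
        stream = pybgpstream.BGPStream(data_interface="singlefile")
        stream.set_data_interface_option("singlefile", "rib-file", cache_file)
        prefixes = set()
        for element in stream:
            pfx = element.fields["prefix"]
            if is_ipv4_prefix(pfx):  # skip IPv6 prefixes before counting
                prefixes.add(pfx)
        counts.append(len(prefixes))
    return counts
```

If your pybgpstream build supports the BGPStream filter language, passing filter="ipversion 4" to the BGPStream constructor may achieve the same thing at parse time; the string check above just avoids depending on that.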