ASSIGNMENT [Engineering]

Senior Backend Engineer

In this task, you are required to build two components of a microservice that obtains information on active containers travelling at sea, using scrapers and a work-management system for handling failed scraping attempts.

Please use Python 3.6 and follow best practices.

Create a Dockerfile for packaging your code into a Docker image, and a corresponding docker-compose.yml file for orchestrating the whole process, including setting up the local database and the queue.

For evaluating the result, we expect the chosen components to be up and running after executing docker-compose up -d in the project directory, so please make sure this is satisfied before submitting the code.
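As a rough sketch of what these two files could look like (the service names, credentials, module paths, and the choice of Redis as the queue backend are all assumptions, not requirements of the assignment):

    # Dockerfile: package the Python 3.6 code base into an image
    FROM python:3.6-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .

    # docker-compose.yml: local database, queue, flow runner, and API
    version: "3"
    services:
      postgres:
        image: postgres:11
        environment:
          POSTGRES_USER: tracker        # hypothetical credentials
          POSTGRES_PASSWORD: tracker
          POSTGRES_DB: shipments
      redis:                            # queue backend; RabbitMQ would also do
        image: redis:5
      flow:                             # runs the Prefect flow with the scraper
        build: .
        command: python -m flow
        depends_on: [postgres, redis]
      api:                              # FastAPI service
        build: .
        command: uvicorn api:app --host 0.0.0.0 --port 8000
        ports: ["8000:8000"]
        depends_on: [postgres]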

In ML/AI products, data is the backbone; you should be able to demonstrate the data challenges you encountered and how you handled them in your code.

Microservice Components

Scraper (Scrapy)

Create a scraper using the Scrapy framework; use of Scrapy is mandatory. You will be scraping https://www.msc.com/track-a-shipment for container numbers and bill of lading numbers.
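A minimal spider sketch follows; the spider name, the query parameter, and all CSS selectors are assumptions that would need to be confirmed by inspecting the site's actual search request and markup (the page is JavaScript-driven, so the real data may come from an XHR endpoint):

    import scrapy

    class MscSpider(scrapy.Spider):
        """Sketch of a tracking spider; the URL parameter and selectors are hypothetical."""
        name = "msc"

        def __init__(self, tracking_number=None, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.tracking_number = tracking_number

        def start_requests(self):
            # The real search is driven by JavaScript; this query string is a
            # placeholder for whatever request the page actually issues.
            url = ("https://www.msc.com/track-a-shipment"
                   "?trackingNumber={}".format(self.tracking_number))
            yield scrapy.Request(url, callback=self.parse)

        def parse(self, response):
            # Hypothetical selectors; adjust after studying the result markup.
            for row in response.css("table.results tr"):
                yield {
                    "tracking_number": self.tracking_number,
                    "pol": row.css("td.pol::text").get(),
                    "pod": row.css("td.pod::text").get(),
                    "eta": row.css("td.eta::text").get(),
                }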

As a stretch goal, you can build a ParseHub scraper for your own understanding, but this is not part of the assignment.

Prefect (Workflow Management)

Create a Prefect flow that incorporates the above scraper as a data-pipeline task to obtain container and bill of lading information (https://www.prefect.io/cloud/).

Store the scraped data in the PostgreSQL database.

Create a Prefect schedule in the same flow to run the scraper every 5 minutes, using the queue you have set up. Also pass retry arguments to @task in your flow to handle failed attempts.
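A sketch of such a flow under Prefect 1.x is shown below; the retry count, retry delay, and flow name are assumptions, and the task bodies are placeholders for the actual Scrapy run and database insert:

    from datetime import timedelta

    from prefect import task, Flow
    from prefect.schedules import IntervalSchedule

    # Retries for failed scraping attempts are configured on the task itself.
    @task(max_retries=3, retry_delay=timedelta(minutes=1))
    def scrape(tracking_number):
        # Placeholder: run the Scrapy spider (e.g. via CrawlerProcess or a
        # subprocess) and return the scraped items for this number.
        return [{"tracking_number": tracking_number}]

    @task
    def store(items):
        # Placeholder: insert the items into PostgreSQL, e.g. with psycopg2.
        print("storing {} items".format(len(items)))

    # Run the pipeline every 5 minutes.
    schedule = IntervalSchedule(interval=timedelta(minutes=5))

    with Flow("msc-container-tracking", schedule=schedule) as flow:
        store(scrape("TGBU5600894"))

    if __name__ == "__main__":
        flow.run()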


FastAPI (API Framework in Python)

Create a PostgreSQL database schema, and create a simple REST API (https://fastapi.tiangolo.com/) to query the scraped data stored in the database.
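A minimal sketch, assuming a shipments table like the one outlined in the Cheat Sheet section below and the hypothetical PostgreSQL credentials from the docker-compose sketch above:

    import psycopg2
    import psycopg2.extras
    from fastapi import FastAPI, HTTPException

    app = FastAPI(title="Container tracking API")

    # Hypothetical connection settings, matching the docker-compose sketch.
    DSN = "host=postgres dbname=shipments user=tracker password=tracker"

    @app.get("/shipments/{tracking_number}")
    def get_shipment(tracking_number: str):
        conn = psycopg2.connect(DSN)
        try:
            with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
                cur.execute(
                    "SELECT tracking_number, pol, pod, eta FROM shipments"
                    " WHERE tracking_number = %s",
                    (tracking_number,),
                )
                rows = cur.fetchall()
        finally:
            conn.close()
        if not rows:
            raise HTTPException(status_code=404, detail="tracking number not found")
        return rows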

Delivery Instructions

Use Git and share the private GitHub repo after completion. Include a simple README so I can reproduce the project.

Cheat Sheet

Study each data point in the search results and try to understand what it means.

Pay special attention to the following:

  • POL - Port of Loading (Start of the journey)
  • POD - Port of Discharge (Finish of the journey)
  • Arrival Date (ETA) - expected time of arrival as per the schedule given by the carrier for a container (please note this is not a prediction)
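These fields suggest the minimal shape of a scraped record. One possible PostgreSQL schema, kept as a DDL string in the Python code base (the table and column names are assumptions):

    # Hypothetical schema for the scraped records.
    CREATE_SHIPMENTS_TABLE = """
    CREATE TABLE IF NOT EXISTS shipments (
        id              SERIAL PRIMARY KEY,
        tracking_number TEXT NOT NULL,  -- container or bill of lading number
        pol             TEXT,           -- Port of Loading (start of the journey)
        pod             TEXT,           -- Port of Discharge (finish of the journey)
        eta             TIMESTAMP,      -- carrier's scheduled arrival, not a prediction
        scraped_at      TIMESTAMP DEFAULT now()
    );
    """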


Basics

Tracking Container Numbers & Bill of Lading Numbers

  1. Hit the URL https://www.msc.com/track-a-shipment (use the search bar to search for the numbers given)
  2. Use Container Number (each container is shipped by a carrier)
    1. TGBU5600894 - Active
    2. TRHU5131609 - Active
  3. Use Bill of Lading Number (each bill of lading is a collection of container numbers)
    1. MEDUJ1656290 - Active
    2. MEDUJ1656241 - Active
    3. MEDUM5024051 - Expired bill of lading, but it should still be scraped
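With a spider parameterised like the sketch above, each of these numbers could be passed in via Scrapy's -a spider arguments (the spider name "msc" is an assumption from that sketch), for example:

    scrapy crawl msc -a tracking_number=TGBU5600894
    scrapy crawl msc -a tracking_number=MEDUJ1656290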
