# Svc::DpCatalog Component

## 1 Introduction
`DpCatalog` is an active F´ component that manages the downlink of generated data products. The data products were previously generated by components into a project-specific set of directories. Upon command, `DpCatalog` builds a catalog of the existing data products and starts downlinking them. Products are downlinked based on the priority stored in each file and then by generation time. `DpCatalog` also has commands to reprioritize and delete data products.
**NOTE:** This release contains an early prototype that does the following:

- Reads the data product files from the specified directory.
- Puts them in a priority-sorted list.
- Sends requests to `Svc/FileDownlink` to downlink each file in the list.
It does not:
- Stop transmissions in progress
- Allow insertion of new products after the catalog is built
- Clear the catalog via command
- Send data product files in pieces. It only sends whole files, so if the downlink is interrupted, the whole file will have to be retransmitted.
Features and more extended unit testing will be added over time. Use at your own risk!
## 2 Requirements

| Requirement | Description | Rationale | Verification Method |
|---|---|---|---|
| SVC-DPCAT-001 | DpCatalog shall read a set of directories and build a catalog of data products. | DpCatalog needs to know at least one directory where data products reside. | Test |
| SVC-DPCAT-002 | DpCatalog shall sort data products first based on the internally recorded priority, where the highest priority is represented by the lowest number. | DpCatalog needs to downlink the highest-priority items first. | Test |
| SVC-DPCAT-003 | DpCatalog shall sort data products second based on the internally recorded time, with the oldest products having higher priority. | DpCatalog needs to downlink the oldest items first. | Test |
| SVC-DPCAT-004 | DpCatalog shall sort data products third based on the internally recorded product ID, with the lowest ID having higher priority. | DpCatalog needs to resolve the case where priority and time match. | Test |
| SVC-DPCAT-005 | DpCatalog shall update the data product metadata once downlink is complete. | DpCatalog needs to track completion status to avoid duplicate downlinks. | Test |
| SVC-DPCAT-006 | DpCatalog shall implement a command and port to build the catalog. | DpCatalog should consume the resources to build the catalog only when needed. | Test |
| SVC-DPCAT-007 | DpCatalog shall implement a command and port to start downlinking the catalog. | DpCatalog downlinks should be timed for when communications are ready, and should not consume comm window time building the catalog. | Test |
| SVC-DPCAT-008 | DpCatalog shall have filters for the downlink based on container ID, priority, and data size limit. | DpCatalog downlinks should be tunable for available bandwidth and container metadata. | Test |
| SVC-DPCAT-009 | DpCatalog shall implement a way to insert newly generated data products into the catalog after the catalog is built. | DpCatalog should notice new products and not require a rebuild of the catalog. | Test |
| SVC-DPCAT-010 | DpCatalog shall implement commands to modify the priority of existing data products. | Operators may need to lower or raise priorities due to troubleshooting, etc. | Test |
| SVC-DPCAT-011 | DpCatalog shall implement commands to delete data products. | Ground tools can use DpManager commands to automatically delete DPs. | Test |
| SVC-DPCAT-012 | DpCatalog shall implement a catalog data product that is a listing of existing data products and their metadata. | Allow operators to know what products exist. | Test |
| SVC-DPCAT-013 | DpCatalog shall have filters for the catalog data product for priority, container ID, and time range. | Allow operators to know what products exist. | Test |
## 3 Design

### 3.1 Assumptions

The design of `DpCatalog` assumes the following:

- A file system exists to store the data product files.
- The contents of the data product files match the data product specification.
- The file downlink will acknowledge completion of each file.
### 3.3 Ports

#### 3.3.1 Role Ports

These ports are automatically connected in the topology to F´ services.

| Name | Role |
|---|---|
| cmdDisp | Receives commands |
| CmdReg | Registers commands |
| CmdStatus | Returns command status |
| Log | Outputs events for ground |
| LogText | Outputs events for console |
| Time | Gets time for time tags |
| Tlm | Outputs telemetry |
#### 3.3.2 Component-Specific Ports

| Name | Type | Kind | Purpose |
|---|---|---|---|
| pingIn | Svc.Ping | async input | Ping from Health |
| pingOut | Svc.Ping | output | Ping response to Health |
| fileOut | Svc.SendFileRequest | output | Send next file to downlink |
| fileDone | Svc.SendFileComplete | async input | Last requested file is complete |
### 3.4 Constants

`DpCatalog` can be statically configured with the following constants:

| Constant | Purpose |
|---|---|
| DP_MAX_DIRECTORIES | Maximum number of directories that can be provided for DPs |
| DP_MAX_FILES | Maximum number of files that can be tracked across directories |

These constants are located in `DpCatalogCfg.hpp` in the `config` directory.
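As a purely illustrative sketch, a project's `DpCatalogCfg.hpp` might look like the following. The constant names come from this document; the values, namespace, and surrounding declarations are assumptions, not the shipped defaults:

```cpp
// Illustrative sketch of a project's config/DpCatalogCfg.hpp.
// Values and declarations are examples only; tune them to the project.
#ifndef SVC_DPCATALOGCFG_HPP
#define SVC_DPCATALOGCFG_HPP

#include <FpConfig.hpp>  // exact header providing FwSizeType may vary by F´ version

namespace Svc {
    // Maximum number of directories that can be provided for DPs
    static const FwSizeType DP_MAX_DIRECTORIES = 2;
    // Maximum number of data product files tracked across those directories
    static const FwSizeType DP_MAX_FILES = 128;
}

#endif
```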
### 3.5 Configuration

During initialization, the `configure()` function takes the following parameters:

```cpp
void configure(
    Fw::FileNameString directories[DP_MAX_DIRECTORIES],
    FwSizeType numDirs,
    Fw::FileNameString& stateFile,
    NATIVE_UINT_TYPE memId,
    Fw::MemAllocator& allocator
);
```
| Parameter | Purpose |
|---|---|
| directories | A set of up to DP_MAX_DIRECTORIES strings naming the directories where DPs are written |
| numDirs | The number of supplied directories |
| stateFile | The location of the file tracking product downlink state |
| memId | The ID of the RAM memory segment used to store catalog state. Not needed for heap allocation. |
| allocator | Memory allocator for RAM memory storage |
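As an illustration of how a topology's setup code might call `configure()`, here is a minimal sketch. The helper name, directory and state-file paths, directory count, and the use of `Fw::MallocAllocator` are assumptions for the example, not requirements of the component:

```cpp
#include <Fw/Types/MallocAllocator.hpp>
#include <Svc/DpCatalog/DpCatalog.hpp>  // include paths may differ by project layout

// Heap-based allocator, so the memId argument is unused.
static Fw::MallocAllocator dpCatalogAllocator;

// Hypothetical topology setup helper; paths and counts are examples only.
void setupDpCatalog(Svc::DpCatalog& dpCat) {
    // Directories where data product files are deposited (example paths),
    // sized up to DP_MAX_DIRECTORIES from DpCatalogCfg.hpp.
    Fw::FileNameString dpDirs[2] = {
        Fw::FileNameString("/dp/images"),
        Fw::FileNameString("/dp/housekeeping")
    };
    Fw::FileNameString stateFile("/dp/dpState.dat");

    dpCat.configure(
        dpDirs,              // directories to scan when BUILD_CATALOG runs
        2,                   // number of directories actually supplied
        stateFile,           // file that persists per-product downlink state
        0,                   // memId: not needed for heap allocation
        dpCatalogAllocator   // allocator for the in-RAM catalog storage
    );
}
```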
### 3.6 Commands

| Command | Arguments | Description |
|---|---|---|
| `BUILD_CATALOG` | none | Builds the in-RAM catalog by scanning the directories provided during initialization. The downlink state file is read in to set the downlink state of each product. |
| `START_XMIT_CATALOG` | `wait` | Starts transmitting the catalog to the ground in priority order. `wait`: wait for the transmission to complete before sending command completion status; used when a sequence wishes to wait for completion before issuing subsequent commands. |
| `STOP_XMIT_CATALOG` | none | Stops the existing catalog transmission. The command completes when the current file is done transmitting. NOT IMPLEMENTED YET |
| `CLEAR_CATALOG` | none | Clears the existing RAM catalog and resets downlink state. Should be followed by `BUILD_CATALOG`. Used for recovery if the state file gets corrupted or out of sync with the file system contents. NOT IMPLEMENTED YET |
**Sequence of Commands**

When the software is first started, the catalog data structure is empty, and the catalog must be built before starting downlink. The `BUILD_CATALOG` command is implemented separately from the `START_XMIT_CATALOG` command so the software can build the catalog prior to a communication session and execute the downlink during the session. Downlink can be halted by issuing the `STOP_XMIT_CATALOG` command in the middle of the downlink. If for some reason the state file, the contents of the tree, and the data product files get out of sync, a `CLEAR_CATALOG` command can be issued, and then the `BUILD_CATALOG` command can be invoked to rebuild the state based only on the existing set of data product files. This causes downlink state to be lost, so data products not deleted after downlink will be re-added to the pending list of downlinks.
### 3.7 Algorithms

#### 3.7.1 Data Product Sorting

Data products are sorted based on the following metadata, in the following order:

1. Data product priority - Data products are generated with a priority, where the lower the number, the higher the priority.
2. Data product generation time - If priorities are the same, older data is prioritized over newer data.
3. Data product ID - If both priority and time are the same (highly unlikely), lower IDs are prioritized first.
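As an illustration of this three-level ordering, here is a minimal comparison sketch. The `DpRecord` struct, its field types, and the function name are hypothetical stand-ins for the metadata the component actually stores:

```cpp
#include <FpConfig.hpp>  // for U32/U64 (exact header may vary by F´ version)

// Hypothetical per-product metadata record used only for this example.
struct DpRecord {
    U32 priority;  // lower number = higher priority
    U64 time;      // generation time; older (smaller) = higher priority
    U32 id;        // product/container ID; lower = higher priority
};

// Returns true if product a should be downlinked before product b.
bool downlinkBefore(const DpRecord& a, const DpRecord& b) {
    if (a.priority != b.priority) {
        return a.priority < b.priority;  // 1. priority
    }
    if (a.time != b.time) {
        return a.time < b.time;          // 2. generation time (older first)
    }
    return a.id < b.id;                  // 3. product ID (lower first)
}
```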
#### 3.7.2 Reading Files

The `configure()` function is provided an array of directories where data product files are generated by `Svc/DpWriter`. When the `BUILD_CATALOG` command is executed, the headers of the data product files are read, and the metadata in the headers is processed and stored in a data structure for sorting. The file name is not stored, in order to conserve memory.
#### 3.7.3 Sorting Algorithm

The data products are sorted using an unbalanced, non-recursive binary tree, based on the description at <https://codestandard.net/articles/binary-tree-inorder-traversal/>.
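As a sketch of the idea (not the component's actual implementation), inserting a record into such an unbalanced tree without recursion can look like the following, reusing the `DpRecord` and `downlinkBefore()` ordering from the previous sketch. The `DpNode` type and its fields are illustrative assumptions:

```cpp
// Hypothetical tree node; in practice, node storage would come from the
// memory provided via the configured Fw::MemAllocator, not new/delete.
struct DpNode {
    DpRecord rec;     // metadata used for ordering (see previous sketch)
    bool downlinked;  // set once the product's file has been sent
    DpNode* left;
    DpNode* right;
};

// Non-recursive insertion into an unbalanced binary search tree ordered by
// downlinkBefore(): walk down from the root until an empty child slot is found.
void insertNode(DpNode*& root, DpNode* node) {
    node->left = nullptr;
    node->right = nullptr;
    if (root == nullptr) {
        root = node;
        return;
    }
    DpNode* current = root;
    while (true) {
        if (downlinkBefore(node->rec, current->rec)) {
            if (current->left == nullptr) {
                current->left = node;  // higher priority goes left
                return;
            }
            current = current->left;
        } else {
            if (current->right == nullptr) {
                current->right = node;  // lower priority goes right
                return;
            }
            current = current->right;
        }
    }
}
```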
#### 3.7.4 Tree Traversal for Downlink

When data products are downlinked, the tree is traversed in order using a stack, as described in the reference above. As each node is visited, the corresponding file is sent for downlink and the node is marked with a completed status. In subsequent traversals, completed nodes are passed over.
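A minimal sketch of such a stack-based, in-order traversal follows, continuing the hypothetical `DpNode` type from the previous sketch. The fixed stack bound and the `sendFile` callback are assumptions for illustration; in the real component, completion is recorded when `Svc/FileDownlink` reports the file done rather than immediately, as simplified here:

```cpp
// Iterative in-order traversal: visits nodes from highest to lowest downlink
// priority without recursion, skipping nodes already marked downlinked.
// DP_MAX_FILES (from DpCatalogCfg.hpp) bounds the explicit stack depth.
void traverseForDownlink(DpNode* root, void (*sendFile)(const DpNode&)) {
    DpNode* stack[DP_MAX_FILES];
    unsigned int depth = 0;
    DpNode* current = root;
    while ((current != nullptr) || (depth > 0)) {
        // Walk left as far as possible: left children sort earlier (higher priority).
        while (current != nullptr) {
            stack[depth++] = current;
            current = current->left;
        }
        current = stack[--depth];  // next node in priority order
        if (!current->downlinked) {
            sendFile(*current);          // request downlink of this product's file
            current->downlinked = true;  // simplified: real completion comes via fileDone
        }
        current = current->right;  // continue with the lower-priority subtree
    }
}
```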
#### 3.7.5 State File
When a data product is downlinked, it is marked in the node as completed, but the state is also written to a file so that downlinked state is preserved across restarts of the software. When the catalog is built, the state file is first read into a data structure in memory. Then, as