Functional overview
Export is triggered by a button next to the result table.
On click of the export button, a CSV file is generated and proposed for download. The export can take a long time and produce large files.
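As an illustration of the download step only, here is a minimal browser-side sketch; the function name, selector and filename are hypothetical, and it assumes the CSV content has already been retrieved (retrieving it from the backend is sketched in the Technical overview below).

```typescript
// Sketch of the "proposed to be downloaded" step: wrap the generated CSV
// content in a Blob and trigger a regular browser download.
// `proposeCsvDownload` and the filename are illustrative, not the real code.
function proposeCsvDownload(csv_content: string, filename: string): void {
    const blob = new Blob([csv_content], { type: "text/csv" });
    const url = URL.createObjectURL(blob);

    const link = document.createElement("a");
    link.href = url;
    link.download = filename;
    link.click();

    URL.revokeObjectURL(url);
}

// Example wiring on the export button near the result table (selector is hypothetical)
document.querySelector("#export-csv-button")?.addEventListener("click", () => {
    proposeCsvDownload("artifact id,project label\n123,My Project\n", "cross-tracker-export.csv");
});
```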
What is exported
- The data set corresponds to ALL the artifacts that match the query (no pagination of results)
- Fields that are covered (see the sketch after this list)
- Always there fields
- project label
- tracker label
- last update date
- last update by
- submission date
- submitted by
- artifact id
- Semantics
- Fields "by duck typing" that are present in at least two selected trackers
- String
- Text
- Integer
- Float
- Dates (with or without time)
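To make the exported column set concrete, here is an illustrative model of one exported row; all property names below are assumptions, not the final CSV headers.

```typescript
// Illustrative shape of one exported row (property names are not the final CSV headers).
type DuckTypedValue = string | number | Date | null;

interface ExportedRow {
    // "Always there" fields
    artifact_id: number;
    project_label: string;
    tracker_label: string;
    submitted_by: string;
    submission_date: Date;
    last_update_by: string;
    last_update_date: Date;
    // Semantics, as resolved tracker by tracker (e.g. title, status)
    semantics: Record<string, string | null>;
    // "Duck typed" fields present in at least two selected trackers:
    // string, text, integer, float, date (with or without time)
    duck_typed_fields: Record<string, DuckTypedValue>;
}
```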
What is "export by duck typing" ?
As data will be exported from several projects, the content might not be consistent. In order to get a result that "means" something, for "general fields" (neither always there fields nor semantics) the export will consider that 2 fields are the same if:
- They have the same name (label might be different)
- They have the same type (int, float, etc).
All other fields will be ignored (that is to say, a field present in only one tracker won't be exported), as sketched below.
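A minimal sketch of that selection rule, assuming a simplified field description (names and types below are illustrative): a (name, type) pair is kept only if it appears in at least two distinct trackers.

```typescript
// Simplified field description used for the sketch; not the real tracker model.
type DuckType = "string" | "text" | "int" | "float" | "date" | "datetime";

interface TrackerField {
    tracker_id: number;
    name: string; // the field's (short) name; labels may differ between trackers
    type: DuckType;
}

// Keep only the (name, type) pairs found in at least two distinct trackers.
function selectDuckTypedFields(fields: ReadonlyArray<TrackerField>): Array<{ name: string; type: DuckType }> {
    const trackers_per_key = new Map<string, Set<number>>();
    for (const field of fields) {
        const key = `${field.name}|${field.type}`;
        const trackers = trackers_per_key.get(key) ?? new Set<number>();
        trackers.add(field.tracker_id);
        trackers_per_key.set(key, trackers);
    }

    const selected: Array<{ name: string; type: DuckType }> = [];
    for (const [key, trackers] of trackers_per_key) {
        if (trackers.size >= 2) {
            const [name, type] = key.split("|");
            selected.push({ name, type: type as DuckType });
        }
    }
    return selected;
}
```

Under this reading, a field that has the same name but a different type in another tracker is not merged with it.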
WARNING: it's likely that big queries with a large number of results and/or a large number of computed fields will not be able to generate the file in a "human acceptable" time. This story doesn't cover this use case; it should be treated in a dedicated story once we have collected enough information about how it behaves on "real" data sets.
Technical overview
Export is done with a dedicated CSV endpoint which is paginated.
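One possible way for the front end to consume that paginated endpoint is sketched below; the URL, the limit/offset parameters and the pagination header are assumptions, not the actual API contract.

```typescript
// Sketch of a client draining the paginated CSV endpoint page by page.
// Endpoint URL, query parameters and the pagination header are assumed names.
async function getCsvExport(query_id: number, page_size: number = 50): Promise<string> {
    const chunks: string[] = [];
    let offset = 0;
    let total = Number.POSITIVE_INFINITY;

    while (offset < total) {
        const response = await fetch(
            `/api/cross_tracker_queries/${query_id}/export.csv?limit=${page_size}&offset=${offset}`,
        );
        // Hypothetical header exposing the total number of matching artifacts
        total = Number(response.headers.get("X-PAGINATION-SIZE") ?? 0);
        chunks.push(await response.text());
        offset += page_size;
    }

    // In this sketch, only the first page is expected to contain the CSV header row
    return chunks.join("");
}
```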
How to build the result
- 1 query for Always There Fields + Semantics
- + N queries: one per "extra" (duck typed) field
- + M queries: one per artifact and per computed field (see the sketch after this list)
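The sketch below mirrors those query counts; the `ExportDb` interface and all names are illustrative assumptions, not the actual data-access layer (computed fields belong to a later story, see below).

```typescript
// Sketch of how one page of rows could be assembled, matching the query counts above.
interface ArtifactBase {
    artifact_id: number;
    [column: string]: unknown;
}

interface ExportDb {
    // 1 query: always there fields + semantics for the current page of artifacts
    searchBaseRows(limit: number, offset: number): Promise<ArtifactBase[]>;
    // 1 query per "extra" (duck typed) field, covering all artifacts of the page
    searchExtraFieldValues(field_name: string, artifact_ids: number[]): Promise<Map<number, unknown>>;
    // 1 query per artifact and per computed field (later story)
    computeFieldValue(field_name: string, artifact_id: number): Promise<number>;
}

async function buildPage(
    db: ExportDb,
    extra_fields: string[],
    computed_fields: string[],
    limit: number,
    offset: number,
): Promise<ArtifactBase[]> {
    const rows = await db.searchBaseRows(limit, offset); // 1 query
    const ids = rows.map((row) => row.artifact_id);

    for (const field of extra_fields) { // + N queries
        const values = await db.searchExtraFieldValues(field, ids);
        for (const row of rows) {
            row[field] = values.get(row.artifact_id) ?? null;
        }
    }

    for (const row of rows) { // + M queries (artifacts × computed fields)
        for (const field of computed_fields) {
            row[field] = await db.computeFieldValue(field, row.artifact_id);
        }
    }
    return rows;
}
```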
Will be done in later stories
- Exporting single-value list fields (e.g. select box and radio button): see story #12523: Export single-value list fields to CSV
- Exporting computed fields (not by default; it's an option the user must activate, e.g. via a checkbox)