10 total deaths
1400 infected
23 days earlier
A platform for managing tabular datasets
date | location | cases |
---|---|---|
Metadata (JSON)
README
Schema
date: datetime
location: string
cases: int
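The declared schema can be checked programmatically before a dataset version is published. Below is a minimal sketch using pandas, assuming the table above is available as a local CSV file; the file name cases.csv is a hypothetical example, not something defined in these slides.

# Minimal sketch: check a CSV against the schema above
# (date: datetime, location: string, cases: int).
import pandas as pd

df = pd.read_csv("cases.csv", parse_dates=["date"])  # "cases.csv" is a hypothetical file name
assert pd.api.types.is_datetime64_any_dtype(df["date"])
assert pd.api.types.is_string_dtype(df["location"])
assert pd.api.types.is_integer_dtype(df["cases"])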
location | date | new_cases | total_cases | new_deaths | total_deaths |
---|---|---|---|---|---|
state | date | cases | deaths |
---|---|---|---|
Bundesland | Meldedatum | AnzahlFall | AnzahlTodesfall |
---|---|---|---|
{ "columnNames": {
"region": "location",
"date": "date",
"total-cases": "total_cases",
"new-cases": "new_cases",
"total-deaths": "total_deaths",
"new-deaths": "new_deaths"
}
}
{ "columnNames": {
"region": "state",
"date": "date",
"total-cases": "cases",
"total-deaths": "deaths",
}
}
{ "columnNames": {
"region": "Bundesland",
"date": "Meldedatum",
"total-cases": "AnzahlFall",
"total-deaths": "AnzahlTodesfall",
}
}
Our World in Data Dataset
metadata.json
New York Times Dataset
metadata.json
RKI Dataset
metadata.json
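The columnNames mapping in each metadata.json lets the three differently named source tables be treated as one logical schema. Below is a rough sketch of how such a mapping could be applied with pandas; this is illustrative only, not the Edelweiss Data implementation, and the CSV and metadata file paths are hypothetical.

import json
import pandas as pd

def load_harmonized(csv_path, metadata_path):
    """Load a source CSV and rename its columns to the shared names from metadata.json."""
    with open(metadata_path) as f:
        column_names = json.load(f)["columnNames"]
    # metadata.json maps canonical name -> source-specific column, so invert it for renaming
    rename_map = {source: canonical for canonical, source in column_names.items()}
    return pd.read_csv(csv_path).rename(columns=rename_map)

# e.g. rki = load_harmonized("rki.csv", "rki/metadata.json")  # hypothetical paths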
One neat way: GitHub Actions
# This workflow runs as a cron job to download the current version of the New York Times
# COVID-19 dataset for the US and publish a new version of this dataset into Edelweiss Data
name: Update New York Times dataset
on:
  schedule:
    - cron: '15 15 * * *'
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # check out the repo containing data-import-scripts and make Python available
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: run update
        working-directory: data-import-scripts
        env:
          REFRESH_TOKEN: ${{ secrets.REFRESH_TOKEN }}
        run: python new-york-times.py
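For illustration, here is a rough sketch of what a script like new-york-times.py could do. The CSV URL assumes the public nytimes/covid-19-data repository layout; the actual Edelweiss Data publishing calls are not shown in these slides, so they are left as a placeholder comment rather than invented.

import os
import requests

# Public NYT US dataset (assumes the nytimes/covid-19-data repository layout)
NYT_CSV = "https://raw.githubusercontent.com/nytimes/covid-19-data/master/us.csv"

def main():
    token = os.environ["REFRESH_TOKEN"]      # injected via the workflow secret
    response = requests.get(NYT_CSV)         # download the current data
    response.raise_for_status()
    with open("us.csv", "w") as f:           # keep a local copy of today's data
        f.write(response.text)
    # Publishing a new dataset version to Edelweiss Data, authenticated with the
    # refresh token, would go here; the client calls are omitted because the API
    # is not shown in these slides.

if __name__ == "__main__":
    main()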
These slides:
slides.com/danielbachler/covid19-edelweiss-data