
Update README.md #608

Open · wants to merge 17 commits into base: `develop`
60 changes: 30 additions & 30 deletions README.md
@@ -1,57 +1,59 @@
hydrus [![Build Status](https://travis-ci.com/HTTP-APIs/hydrus.svg?branch=master)](https://travis-ci.com/HTTP-APIs/hydrus)
Hydrus [![Build Status](https://travis-ci.com/HTTP-APIs/hydrus.svg?branch=master)](https://travis-ci.com/HTTP-APIs/hydrus)
===================
hydrus is a set of **Python**-based tools for easier and more efficient creation of Hypermedia-driven REST APIs. hydrus utilises the power of [Linked Data](https://en.wikipedia.org/wiki/Linked_data) to create powerful REST APIs to serve data.
hydrus uses the [Hydra(W3C)](http://www.hydra-cg.com/) standard for creation and documentation of it's APIs.
hydrus uses the [hydra(W3C)](http://www.hydra-cg.com/) standard for creation and documentation of its APIs.

Start-up the demo
-----------------
* with *Docker* and *docker-compose* installed, run `docker-compose up --build`
* open the browser at `http://localhost:8080/api/vocab`
## Start-up the demo
- With [*Docker*](https://www.docker.com/) and [*docker-compose*](https://docs.docker.com/compose/) installed, run
```bash
docker-compose up --build
```
- Open the browser at `http://localhost:8080/api/vocab`.

Your browser should display the example API as served by the server.

Add your own Hydra documentation file
-------------------------------------
To serve your own Hydra-RDF documentation file:
* create a `doc.py` file as the ones in `examples/` directory containing your own *ApiDoc*
* set the `APIDOC_REL_PATH` variable in `docker-compose.yml`. This should the relative path from the project root
* start-up the demo as above.
## Add your own hydra documentation file
To serve your own hydra-RDF documentation file:

1. Create a `doc.py` file like the ones in the `examples/` directory, containing your own *ApiDoc*.
2. Set the `APIDOC_REL_PATH` variable in `docker-compose.yml`. This should be the relative path from the project root.
3. Start up the demo as above.

Your browser should display your API as served by the server.
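As a sketch of what step 2 implies (all names here are illustrative assumptions, not actual hydrus code), the relative path is simply joined onto the project root inside the container:

```python
import os

# Illustrative sketch only: how a relative ApiDoc path such as
# APIDOC_REL_PATH could be resolved against the project root.
# `resolve_apidoc` is a hypothetical helper, not hydrus code.
def resolve_apidoc(project_root: str, rel_path: str) -> str:
    return os.path.normpath(os.path.join(project_root, rel_path))

print(resolve_apidoc("/app", "examples/doc.py"))  # /app/examples/doc.py
```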

Table of contents
-------------
## Table of contents
* [Features](#features)
* [Requirements](#req)
* [Demo](#demo)
* [Usage](#usage)

<a name="features"></a>
Features
-------------
## Features
hydrus supports the following features:
- A client that can understand Hydra vocabulary and interacts with a Hydra supporting server to basic [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations on data.
- A client that can understand the hydra vocabulary and interact with a hydra-supporting server to perform basic [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations on data.
- A generic server that can serve required data and metadata (in the form of API documentation) to a client over HTTP.
- A middleware that allows users to use the client to interact with the server using Natural Language, which is processed into a machine-consumable language. **(under development)**

<a name="req"></a>
Requirements
-------------
## Requirements
The system is built over the following standards and tools:
- [Python](https://www.python.org/downloads/) 3.6 and above
- [Flask](http://flask.pocoo.org/), a Python-based micro-framework for handling server requests and responses.
- [JSON-LD](http://json-ld.org/spec/latest/json-ld/) as the preferred data format.
- [Hydra](http://www.hydra-cg.com/) as the API standard.
- [hydra](http://www.hydra-cg.com/) as the API standard.
- [SQLAlchemy](http://www.sqlalchemy.org/) as the backend database connector for storage and related operations.

Apart from this, there are also various Python packages that hydrus uses. Using `python setup.py install` installs all the required dependencies.

**NOTE:** You'll need to use `python3` not `python2`. Hydrus does not support python < 3.6
**NOTE:** You'll need to use `python3`, not `python2`. hydrus does not support Python < 3.6.

To check your Python version, run:
```bash
python --version
```
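The same version floor can be checked programmatically; this is a minimal sketch, not part of hydrus itself:

```python
import sys

# Minimal sketch (not hydrus code): enforce the Python >= 3.6 floor
# stated above before doing anything else.
def supported_python(minimum=(3, 6)):
    return sys.version_info[:2] >= minimum

print(supported_python())  # True on any interpreter hydrus supports
```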

<a name="demo"></a>
Demo
-------------
## Demo
To run a demo for hydrus using the sample API, just do the following:

1. Clone hydrus:
```bash
git clone https://github.com/HTTP-APIs/hydrus
```

@@ -78,7 +80,8 @@ NOTE: there is an alternative way to install dependencies with `poetry`:
```bash
pip3 install poetry
poetry install
```
This is mostly used to check dependencies conflicts among packages and to release to `PyPi`.

This is mostly used to check for dependency conflicts among packages and to release to [`PyPI`](https://pypi.org/).

After installation is successful, to *run the server*:
```bash
hydrus serve
```
The demo should be up and running on `http://localhost:8080/serverapi/`.

<a name="usage"></a>
Usage
-------------
## Usage
For more info, head to the [Usage](http://www.hydraecosystem.org/01-Usage.html) section of the [wiki](http://www.hydraecosystem.org/).


Development
-------------

## Development
From the `hydrus` directory:
* To run formatter: `pip install black && black *.py`
* To test for formatting: `flake8 *.py`
28 changes: 15 additions & 13 deletions docs/STARTING.md
@@ -3,14 +3,16 @@ This document is just a collection of annotations at the moment.

### Objectives
This project is basically aimed at solving these issues:
* provide an attractive demo API to show HYDRA specs, we decided to leverage space sciences as they provide a subject of great interest;
* define a multi-layered (and multi-peer?) approach to make the API usable and adaptable to different filtering/querying needs;
* make useful experiments about the right way to approach aggregation/filtering procedures among data served by multiple HYDRA-enhanced endpoints (starting from well-know industry standards for database management and data warehousing, and some glimpses to GraphQL).
* Provide an attractive demo API to show the hydra specs; we decided to leverage the space sciences as they provide a subject of great interest;
* Define a multi-layered (and multi-peer?) approach to make the API usable and adaptable to different filtering/querying needs;
* Make useful experiments about the right way to approach aggregation/filtering procedures among data served by multiple hydra-enhanced endpoints (starting from well-known industry standards for database management and data warehousing, and some glimpses of [GraphQL](https://graphql.org/)).


### Starting assumptions

We suppose the server has to be build with at least two layers of abstraction: a low-level strict-HYDRA-RDF API that can map as directly as possible the data to resources/collections ("Server"), and a middleware API ("Client") to perform some sort of opinionated intermediation between the lower level and the "final" client ("UI") above. The middleware has to implement data querying on the lower endpoints, leveraging HYDRA's metadata framework; to do this it has in some way to implement precise choices about the aggregation mechanism that is going to fetch and "map-reduce" (aggregate) the data from the lower level (this part is really open for discussions, as a starting example you can check Mongo's pipelines and its matching/grouping system); because of the different patterns that makes possible this kind of results, some choices about tools and techniques have to be discussed and options defined.
#### We suppose the server has to be built with at least **two layers of abstraction**:

A **low-level strict-HYDRA-RDF API** that can map the data as directly as possible to resources/collections **("Server")**, and a middleware API **("Client")** that performs some sort of opinionated intermediation between the lower level and the **"final" client ("UI")** above. The middleware has to implement data querying on the lower endpoints, leveraging HYDRA's metadata framework. To do this it has, in some way, to implement precise choices about the aggregation mechanism that is going to fetch and "map-reduce" (aggregate) the data from the lower level (this part is really open for discussion; as a starting example you can check Mongo's pipelines and their matching/grouping system). Because of the different patterns that make this kind of result possible, some choices about tools and techniques have to be discussed and options defined.
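As a toy illustration of such an aggregation mechanism (the document shape and stage names are assumptions, loosely mirroring the Mongo `$match`/`$group` stages mentioned above):

```python
from collections import Counter

# Toy sketch of the middleware's "map-reduce" style aggregation over
# documents fetched from lower-level endpoints. The document shape is an
# assumption for illustration only.
docs = [
    {"class": "Planet", "name": "earth"},
    {"class": "Planet", "name": "mars"},
    {"class": "Star", "name": "sun"},
]

matched = [d for d in docs if d["class"] == "Planet"]   # like a $match stage
grouped = Counter(d["class"] for d in matched)          # like a $group stage
print(grouped["Planet"])  # 2
```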

### About the option of having endpoints that are type- (class-) based
As we discussed, the fun idea would be to have something that can "mix" REST and RPC in some way, not technically but at least conceptually. This approach comes from the observation that the statefulness of a REST API can be in some sort (at a level we are trying to understand for the purpose of our implementation) related to the concept of invariability of the output of a pure function. This would be practically accomplished by dedicating some endpoints to resources, obviously, but also to "operations" (functions that accept some kind of parameters and run some kind of reasoning on a data structure; we temporarily define these endpoints as `hydra:Operation`s).
@@ -19,14 +21,14 @@
What do we mean by "aggregation" and "filtering" with HYDRA? Let's see a simple example.

As HYDRA is meant to let clients interoperate automatically, we try here to subset the problem by posing it in this shape: starting from an initial input from a human/user, how can different layers of HYDRA-featured APIs respond consistently by querying the provided endpoints?
* "UI" layer: a user (or a machine from another network) is wishful to know "what is most distant from the Sun, Earth or Mars?"
* "Client" layer: the client knows that some endpoints are available at a lower level, and we suppose that it knows it has to look for some kind of length value. It looks for the endpoints that can help, we suppose it can understand the fact that it needs the `/api/planet/calculate_average_au` (look for the right operation to perform on the "planet" class, that is basically a documentation problem); so it pass the parameters (Earth and Mars) to it. This layer is commonly referred to as *middleware*; we temporarly define `/api/planet/calculate_average_au` as a `hydra:Operation` (an endpoint that performs come kind of aggregation on other endpoints, something more similar to an RPC call more than a REST call probably), all the `hydra:Operation`s referred to a `hydra:Class` type has to be listed in the class' definition;
* "Server" layer: the server endpoints are queried and they serve the required data to calculate and respond: "Mars!". This server is the one that works like a traditional HYDRA-enhanced API, with all the descriptive features provided by the spec.
* **"UI" layer:** a user (or a machine from another network) wishes to know "which is more distant from the Sun, Earth or Mars?"
* **"Client" layer:** the client knows that some endpoints are available at a lower level, and we suppose that it knows it has to look for some kind of length value. It looks for the endpoints that can help; we suppose it can understand that it needs `/api/planet/calculate_average_au` (looking for the right operation to perform on the "planet" class, which is basically a documentation problem), so it passes the parameters (Earth and Mars) to it. This layer is commonly referred to as *middleware*. We temporarily define `/api/planet/calculate_average_au` as a `hydra:Operation` (an endpoint that performs some kind of aggregation on other endpoints, probably more similar to an RPC call than a REST call); all the `hydra:Operation`s referring to a `hydra:Class` type have to be listed in the class' definition;
* **"Server" layer:** the server endpoints are queried and they serve the data required to calculate and respond: "Mars!". This server is the one that works like a traditional HYDRA-enhanced API, with all the descriptive features provided by the spec.

In this scenario, allowed operations are defined by the possible interactions of different (or same-kind) classes (this idea is inspired by the Scala programming language's typing system). For example, two instances of a `Planet` (of the Solar System) class can have in common the defined "operation" `calculate_average_au` (compute the average distance of two planets using the Sun-Earth distance as the unit (AU)). This way, in the API, we could call `/api/planet/calculate_average_au` and pass the related parameters to it.
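A hypothetical implementation of such an operation might look like the sketch below; the endpoint name comes from the text above, while the semi-major-axis values are approximate assumptions, not data served by any real hydrus instance:

```python
# Hypothetical sketch of the `calculate_average_au` operation. The
# distances are approximate semi-major axes in AU, assumed for the example.
SEMI_MAJOR_AXIS_AU = {"Earth": 1.0, "Mars": 1.524}

def calculate_average_au(planets):
    return sum(SEMI_MAJOR_AXIS_AU[p] for p in planets) / len(planets)

print(calculate_average_au(["Earth", "Mars"]))  # 1.262
```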

A request of this kind, with this body:
```json
POST /api/planet/calculate_average_au
{
@type: "hydra:Collection",
@@ -37,7 +39,7 @@
}
```
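On the server side, a handler for such a request might start by pulling the member identifiers out of the collection. This is a hedged sketch: the `members` key and `@id` paths are assumptions based on the `hydra:Collection` shape, since the middle of the example body is elided in the diff:

```python
# Hedged sketch: extracting member identifiers from a request body shaped
# like the hydra:Collection above. The "members" key is an assumption.
request_body = {
    "@type": "hydra:Collection",
    "members": [
        {"@id": "/api/planet/earth"},
        {"@id": "/api/planet/mars"},
    ],
}

names = [m["@id"].rsplit("/", 1)[-1] for m in request_body["members"]]
print(names)  # ['earth', 'mars']
```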
This endpoint could respond with:
```json
{
@type: "vocab:Result", # or the given type in the HYDRA framework
@returns: "umbel:Float",
@@ -46,7 +48,7 @@
}
```
This RPC-like endpoint could be meta-described as:
```json
{
"@context": {
"hydra": "http://www.w3.org/ns/hydra/context.jsonld"
```

@@ -69,9 +71,9 @@


### Stack
* initial version: local Flask ("Server") and in-memory low-footprint actors ("Client") reading from local vocabularies, use ZeroMQ;
* development version: use a cache (Mongo or Redis) for documents and try to represent a graph;
* stable version: add a some kind of graph database under the cache layer;
* **Initial version:** local Flask ("Server") and in-memory low-footprint actors ("Client") reading from local vocabularies, use ZeroMQ;
* **Development version:** use a cache (Mongo or Redis) for documents and try to represent a graph;
* **Stable version:** add some kind of graph database under the cache layer;
* ...

### Implementation
6 changes: 3 additions & 3 deletions docs/TASKS.md
@@ -1,7 +1,7 @@

Establish a protocol for building routes in a way that can fit the HYDRA spec:
# Establish a protocol for building routes in a way that can fit the HYDRA spec:

* Every route (see `server.routes.py`) has to be documented with a `hydra:ApiDocumentation` (automatically): it can be used the Werkzeug `Rule.endpoints` property to name each route (within the `add_url_rule` method) and map to an apidoc response;
* Every route (see `server.routes.py`) has to be documented with a `hydra:ApiDocumentation` (automatically): the Werkzeug `Rule.endpoint` property can be used to name each route (within the `add_url_rule` method) and map it to an apidoc response;
* Rejoin all the different vocabularies into just one big vocabulary and make the server filter by argument if necessary (example vocabularies are "astronomy" and "solarsystem");
* Routes, outside the basic collections, have to be opinionated considering the concerns of filtering and data creation required by the instantiation of objects based on the provided classes (*or do it in a separate layer above*); in this case we would have a "smart middleware" that relies on a lower server layer.
* Start with the lower layer, that should be meant to provide resource one-by-one or by collections
* Start with the lower layer, which should be meant to provide resources one-by-one or by collections.
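A dependency-free sketch of the first bullet above (all names are illustrative assumptions; the real implementation would go through Flask's `add_url_rule` and Werkzeug's `Rule.endpoint`):

```python
# Sketch only: keep a registry keyed by endpoint name, so every named route
# can be mapped to an ApiDoc fragment, mirroring what Werkzeug's
# `Rule.endpoint` makes possible. Names and shapes are assumptions.
routes = {}          # endpoint name -> URL rule
apidoc_entries = {}  # endpoint name -> documentation fragment

def add_documented_route(rule, endpoint, doc):
    routes[endpoint] = rule
    apidoc_entries[endpoint] = doc

add_documented_route("/api/planet", "planet_collection",
                     {"@type": "hydra:Class", "title": "Planet"})

print(sorted(apidoc_entries))  # ['planet_collection']
```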
40 changes: 20 additions & 20 deletions docs/TUTORIALS.md
@@ -1,35 +1,35 @@
# hydrus Workflow
# Hydra Workflow

This document can be used to understand how hydrus works. <br>
This document can be used to understand how hydra works. <br>
To understand the concepts related to hydrus:<br>
https://www.hydraecosystem.org/Starting-Material

### How is an `ApiDoc` defined:
- There are two ways of defining an ApiDoc:
1. Writing a Hydra-compliant JSON-LD like [here]( https://github.com/HTTP-APIs/hydrus/blob/master/hydrus/samples/doc_writer_sample_output.py)
1. Defining the ApiDoc programmatically using the code like in [here](https://github.com/HTTP-APIs/hydrus/blob/master/hydrus/samples/doc_writer_sample.py)
1. Writing a Hydra-compliant JSON-LD document like [this one](https://github.com/HTTP-APIs/hydrus/blob/master/hydrus/samples/doc_writer_sample_output.py).
2. Defining the ApiDoc programmatically in code, as [here](https://github.com/HTTP-APIs/hydrus/blob/master/hydrus/samples/doc_writer_sample.py).
- Any valid ApiDoc can be passed to the server so that it can parse the doc, generate a database and start the server.
- The document containing the context is generated by the `doc_maker` module in the hydra_python_core repo. hydrus provides the link to the context in its responses.


### Points:
#### Description of some files:
1. hydrus/app.py
1. Used for logging. (Line 22)
1. Starts the SQLAlchemy engine and creates a session (Line 25-26)
1. Creates an Api Documentation using the doc_maker() of hydra_python_core
1. Gets the parsed classes from hydrus/data/doc_parse.py
1. Creates database tables and its metadata Line 34-36
1. Checks if Authentication is True, adds user to the session.Check: https://www.hydraecosystem.org/Auth
1. Calls app_factory class' constructor
1.Sets authentication,api_name,session etc. which can be later retrieved using the corresponding get functions.
1. Runs the API on the Port mentioned.
1. hydrus/app_factory.py
1. Used to add resources to the API from resources.py.
1. [*hydrus/app.py*](https://github.com/HTTP-APIs/hydrus/blob/develop/hydrus/app.py).
1. Used for logging. [Line 22](https://github.com/HTTP-APIs/hydrus/blob/c6a8587c7904afe64c31f74da1f9d4edc7139ed2/hydrus/app.py#L22).
2. Starts the SQLAlchemy engine and creates a session. [Line 30](https://github.com/HTTP-APIs/hydrus/blob/c6a8587c7904afe64c31f74da1f9d4edc7139ed2/hydrus/app.py#L30).
3. Creates an Api Documentation using the [doc_maker() of hydra_python_core](https://hydra-python-core.readthedocs.io/_/downloads/en/develop/pdf/).
4. Gets the parsed classes from [hydrus/data/doc_parse.py](https://github.com/HTTP-APIs/hydrus/blob/develop/hydrus/data/doc_parse.py).
5. Creates database tables and their metadata. [Lines 34-36](https://github.com/HTTP-APIs/hydrus/blob/c6a8587c7904afe64c31f74da1f9d4edc7139ed2/hydrus/data/doc_parse.py#L34).
6. Checks if Authentication is True and adds the user to the session. [See Auth](https://www.hydraecosystem.org/Auth).
7. Calls the app_factory class' constructor.
8. Sets authentication, api_name, session, etc., which can later be retrieved using the corresponding get functions.
9. Runs the API on the Port mentioned.
1. [*hydrus/app_factory.py*](https://github.com/HTTP-APIs/hydrus/blob/develop/hydrus/app_factory.py)
1. Used to add resources to the API from `resources.py`.
1. Configures properties of API.
1. Enables CORS
1. hydrus/resources.py
1. Enables CORS.
1. [*hydrus/resources.py*](https://github.com/HTTP-APIs/hydrus/blob/develop/hydrus/resources.py)
1. Contains classes which are added as resources to the API.

1. hydrus/samples/hydra_doc_sample.py
1. This file contains the sample ApiDoc that is displayed when the demo for hydrus is run. It contains the JSON-LD document that is displayed.
1. [*hydrus/samples/hydra_doc_sample.py*](https://github.com/HTTP-APIs/hydrus/blob/develop/hydrus/samples/hydra_doc_sample.py).
1. This file contains the sample ApiDoc that is displayed when the demo for hydrus is run. It contains the [JSON-LD](https://json-ld.org/) document that is displayed.
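The ordering of the `app.py` steps listed above can be summarised in a tiny sketch (plain strings standing in for the real calls; this is not hydrus code, and the step labels are paraphrases of the list):

```python
# Illustrative sketch of the app.py startup order described above; each
# string stands in for one real step. Not the actual hydrus API.
def startup_steps(authentication=True):
    steps = [
        "configure logging",
        "start SQLAlchemy engine + session",
        "build ApiDoc via doc_maker()",
        "parse classes (doc_parse)",
        "create database tables",
    ]
    if authentication:
        steps.append("add user to session (Auth)")
    steps += ["call app_factory()", "run on configured port"]
    return steps

print(len(startup_steps()))  # 8
```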
4 changes: 2 additions & 2 deletions docs/hydrus_database.md
@@ -1,4 +1,4 @@
## hydrus
## Hydrus

`hydrus` is a core part of the HydraEcosystem. It is the server which powers Hydra-based API Docs.

@@ -21,4 +21,4 @@ The overview of the process of making a database from `apidoc` object is:
### Example ER Diagram
It might be easier to understand what we discussed above if we could see an ER diagram for a Hydra API Doc.
So, let us take the sample [Drone API Doc](https://github.com/HTTP-APIs/hydrus/blob/develop/hydrus/samples/hydra_doc_sample.py) as our example.
The ER diagram for the database generated for this Hydra doc can be found [here](er_diagram.pdf).
The [ER diagram](https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model) for the database generated for this Hydra doc can be found [here](er_diagram.pdf).
Binary file modified docs/wiki/images/db_schema.png
Binary file modified docs/wiki/images/hydra_dataflow.png
Binary file modified docs/wiki/images/rdf_dataflow.png
Binary file modified docs/wiki/images/use_case1.png