
# Creating Datalog Tests

Thank you for contributing to the DDlog project!

Follow these steps, detailed below, to create a DDlog test.

  1. Write a test DDlog program
  2. Create test workload
  3. Create a pull request

## Writing a test program

Test files are located in the `test/datalog_tests` directory. The test script automatically tests all files with the `*.dl` extension.

  1. Create a file named `test/datalog_tests/<test>.dl`, where `<test>` is the name of your test.
  2. Write your test program, including type declarations, relations, functions, and rules.
  3. Run `stack test` (or `stack test --ta '-p <test>'` to run an individual test only) anywhere in the source tree. The test script will automatically validate and compile the program. Fix any compilation errors. If you get a Rust compilation error message starting with something like `cargo test failed with exit code ExitFailure 101`, this means that the DDlog compiler produced invalid Rust code; please submit a bug report. Note: compilation will take a minute or two the first time, as cargo pulls and compiles Rust dependencies. Once compilation succeeds, the test creates a `test/datalog_tests/<test>.ast` file and a golden test file `test/datalog_tests/<test>.ast.expected`. Delete the `.expected` file whenever you change the DDlog program, so that it is regenerated.

### Example DDlog program `path.dl`

The following example computes all paths in a graph as the transitive closure of the `Edge` relation.

```
typedef node = string

input relation Edge(s: node, t: node)

relation Path(s1: node, s2: node)

Path(x, y) :- Edge(x, y).
Path(x, z) :- Path(x, w), Edge(w, z).
```
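Step 2 above also mentions functions. As an illustrative sketch only (the `is_loop` helper and the `SelfLoop` relation are hypothetical, and the exact function syntax should be checked against the DDlog language reference), the example could be extended with:

```
// Hypothetical helper function: true when an edge is a self-loop.
function is_loop(s: node, t: node): bool {
    s == t
}

// Hypothetical relation collecting nodes that have self-loop edges.
relation SelfLoop(n: node)
SelfLoop(x) :- Edge(x, y), is_loop(x, y).
```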

## Creating a test workload

Next, we would like to check that the DDlog program computes correct outputs.

  1. Create a file named `test/datalog_tests/<test>.dat` (the test script automatically associates this file with the corresponding `.dl` file).
  2. Populate the file with a sequence of commands from the table below.
  3. Run `stack test` again. This creates a file named `test/datalog_tests/<test>.dump`, containing the output of all `dump` and `echo` commands in the script, as well as `test/datalog_tests/<test>.dump.expected`.
  4. Inspect the dump file and make sure that the output of the tool is correct. If it is not, please submit a bug report including the `.dl` and `.dat` files.
  5. Delete the `.expected` files whenever the DDlog program or the test workload changes; otherwise the test will fail.

## Command reference

| Command | Example | Description |
|---------|---------|-------------|
| `start;` | | start a transaction |
| `commit;` | | commit the current transaction |
| `commit dump_changes;` | | commit the current transaction and dump all changes to output relations |
| `rollback;` | | roll back the current transaction, reverting all changes |
| `timestamp;` | | print the current time in ns since some unspecified epoch |
| `dump;` | | dump the contents of all relations |
| `dump <relation>;` | `dump Rel1;` | dump the contents of an individual relation |
| `query_index <index>(<args>);` | `query_index Edge_by_from(100);` | dump all values in an indexed relation with the given key |
| `dump_index <index>;` | `dump_index Edge_by_from;` | dump all values in an indexed relation |
| `echo <text>;` | `echo Hello world;` | copy arbitrary text to stdout |
| `log_level <level>;` | `log_level 100000;` | set the maximum log level for messages output via the log API; messages with higher priority will be dropped (see `log.dl`) |
| `insert <record>;` | `insert Rel1(1,true,"foo");` | insert a record into relation `Rel1` |
| | `insert Rel1(.arg2=true, .arg1=1, .arg3="foo");` | as above, but using named rather than positional arguments |
| | `insert Rel2(.x=10, .y=Constructor{"foo", true});` | pass structured data by calling a type constructor |
| | `insert Rel2(.x=10, .y=Constructor{.f1="foo", .f2=true});` | type constructor arguments can also be passed by name |
| `delete <record>;` | `delete Rel1(1,true,"foo");` | delete a record from `Rel1` (using argument syntax identical to `insert`) |
| `delete_key <relation> <key>;` | `delete_key Rel1 1;` | delete a record by key; only valid for relations with a primary key |
| `modify <relation> <key> <- <record>;` | `modify Rel1 1 <- Rel1{.f1 = 5};` | modify a record; `<record>` specifies only the fields to be modified; only valid for relations with a primary key |
| comma-separated updates | `insert Foo(1), delete Bar("buzz");` | a sequence of `insert` and `delete` commands can be applied in one update |
| `profile;` | | print the CPU and memory profile of the DDlog program |
| `profile cpu "on"/"off";` | | control the recording of differential operator runtimes; set to `"on"` to enable construction of the program's CPU profile (default: `"off"`) |
| `exit;` | | terminate execution |
| `# <comment>` | `# comment ending at the end of line` | comment that ends at the end of the line |
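The `query_index` and `dump_index` commands above assume that the `.dl` program declares an index over the relation. A minimal sketch of such a declaration (the `Edge_by_from` name matches the examples above; the exact `index` syntax should be checked against the DDlog language reference):

```
// Hypothetical index over Edge, keyed by the source node.
index Edge_by_from(s: node) on Edge(s, _)
```

With this declaration in place, `query_index Edge_by_from("Palo Alto");` would dump the `Edge` facts whose source node is `"Palo Alto"`.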

Note: Placing a semicolon after an `insert` or `delete` operation tells DDlog to apply the update instantly, without waiting for subsequent updates. Comma-separated updates are only applied once the full list of updates has been read (the end of the list is indicated by a semicolon). This is significantly more efficient and should always be the preferred option when the client wants to apply multiple updates that are known at the same time.
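For example, using the `Edge` relation from the program above, the two styles can be mixed in one `.dat` script (a sketch; the node names are illustrative):

```
start;

# Applied instantly: the semicolon terminates this one-element update.
insert Edge("a", "b");

# Batched: these updates are buffered and applied together once the
# terminating semicolon is reached -- the more efficient option.
insert Edge("b", "c"),
insert Edge("c", "d"),
delete Edge("a", "b");

commit;
```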

### Example workload `path.dat`

```
start;

insert Edge("Palo Alto", "Palo Alto"),
insert Edge("Palo Alto", "Redwood City"),
insert Edge("Redwood City", "San Bruno");

commit;
dump;
```

This workload generates the following `path.dump` file:

```
Path:
Path{"Palo Alto","Redwood City"}
Path{"Palo Alto","Palo Alto"}
Path{"Palo Alto","San Bruno"}
Path{"Redwood City","San Bruno"}

Edge:
Edge{"Palo Alto","Redwood City"}
Edge{"Palo Alto","Palo Alto"}
Edge{"Redwood City","San Bruno"}
```

## Creating a pull request

Include the following files in your PR:

  1. `<test>.dl`
  2. `<test>.ast.expected`
  3. `<test>.dat`
  4. `<test>.dump.expected`