
Setup chunks should be available to solutions #202

Closed
matthew-brett opened this issue Nov 13, 2018 · 13 comments · May be fixed by datacamp/testwhat#233
Labels
answered? (Has the question been answered?) · difficulty: advanced (Best for maintainers to address) · effort: medium (< 3 days of work) · priority: high (Must be fixed before next release)
Comments

@matthew-brett

We are writing a tutorial where the students are often, but not always, using a data frame that should already be in the workspace. To do this, for the students' code, we have an exercise `my-exercise` with the option `exercise.setup="get-the-data"`, where `get-the-data` is a setup chunk that loads the data.

Of course the solution needs the same data, but adding `exercise.setup="get-the-data"` does not appear to work as an option on `my-exercise-solution`.

I think the setup for the exercise should be available to the solution by default. At the very least, it should be possible to pass the setup to the solution code. Otherwise we have to replicate the setup in every solution chunk, which is bearable when the setup is minor but very annoying when it becomes more complex.
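
For reference, a minimal sketch of the layout described above; the chunk names come from the text, and the `read.csv()` call is just a placeholder for whatever the real setup chunk does:

```{r get-the-data}
# Shared setup chunk: load the data frame the exercises rely on
my_data <- read.csv("https://example.com/some-data.csv")
```

```{r my-exercise, exercise=TRUE, exercise.setup="get-the-data"}
# Student code: my_data is available here via exercise.setup

```

```{r my-exercise-solution}
# The solution needs my_data too, but exercise.setup is not honoured here
head(my_data)
```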

See: alan-turing-institute/r-from-scratch#3 for discussion.

@schloerke
Collaborator

schloerke commented Jul 30, 2019

In the development version of learnr, there is a function called `duplicate_env` (see #176). Also, a new `envir_prep` argument has been added to the checker function (see #175).

Would these two additions solve your issue?
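
A rough sketch of how a custom `exercise.checker` might use the two together. The argument names below are my reading of the development version referenced above (#175, #176), so treat them as assumptions rather than the confirmed API:

```r
# Hypothetical checker sketch: argument names assumed from the learnr
# development version, not confirmed here.
my_checker <- function(label, user_code, solution_code, check_code,
                       envir_result, evaluate_result, envir_prep, ...) {
  # Copy the prepared (setup) environment so the check code sees the same
  # objects the exercise saw, without mutating the original environment.
  check_env <- learnr::duplicate_env(envir_prep)
  # Evaluate the check code in that copy.
  eval(parse(text = check_code), envir = check_env)
  # A real checker would build and return a learnr feedback result here.
  invisible(NULL)
}
```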

@schloerke added the "answered?" label on Jul 31, 2019
@matthew-brett
Author

I'm sorry - I'm afraid I don't know the code well enough to see immediately how this would work. At the moment, I have questions like this:

```{r names, exercise=TRUE, exercise.setup="get-gender-data"}
# Display the names from the gender_data data frame

```

```{r names-hint}
You will need something starting with `names(` and ending with `)`. What goes inside the brackets?
```

```{r names-solution}
# Load in the data
gender_data <- read.csv('http://bit.ly/gender-stats-data')

# Display the names from the gender_data data frame
names(gender_data)
```

```{r names-check}
# Load in the data
gender_data <- read.csv('http://bit.ly/gender-stats-data')

ex() %>% check_function("names") %>% {
  check_arg(., "x") %>% check_equal()
}

state <- ex()
for (name in names(gender_data)) {
  check_output(state, name, missing_msg=paste('Expecting "', name, '" in output', sep=''))
}
```

How would the new features work, to allow me to share the setup between the exercise, the solution and the check?

Sorry if that is obvious.

@schloerke
Collaborator

It's not obvious, and I'll admit the setup behaviour is confusing. You're right: as things stand, I don't see a direct way to access the information.

  • Do you have something set as your `exercise.checker` at the top of your document?
  • Is there a small reproducible document with one exercise that I can look at on my end?

@schloerke
Collaborator

Ok! I believe I found the disconnect.

`testwhat::testwhat_learnr` is defined in https://github.com/datacamp/testwhat/blob/master/R/utils-learnr.R. As of this comment, the signature at https://github.com/datacamp/testwhat/blob/5a6b3b4914aa4dd050797cdbb598808936a8acf3/R/utils-learnr.R#L36 does not include `envir_prep`, because learnr v0.10.0 has not yet been released to CRAN.

Once that learnr release is on CRAN, `testwhat::testwhat_learnr` should accept `envir_prep` as a parameter, and the code at https://github.com/datacamp/testwhat/blob/5a6b3b4914aa4dd050797cdbb598808936a8acf3/R/utils-learnr.R#L70 should be updated with something like:

```r
res <- run_until_fail(
  parse(text = check_code),
  envir = learnr::duplicate_env(envir_prep)
)
```

This would be similar to `gradethis::grade_learnr`.
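
As an aside on the design point: as I understand it, `duplicate_env` copies the bindings of the prepared environment into a fresh environment, so whatever the check code does cannot leak back into the original setup environment. A small illustration (the object name is just a placeholder):

```r
prep <- new.env()
prep$gender_data <- data.frame(country = c("A", "B"))

# Copy the prepared environment before evaluating any check code in it
check_env <- learnr::duplicate_env(prep)
check_env$gender_data <- "overwritten by the check"  # rebinds only in the copy

prep$gender_data  # still the original data frame: the setup env is untouched
```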

@schloerke added and removed the "answered?" label on Aug 1, 2019
@matthew-brett
Author

Ah - so does that mean that `envir_prep` will apply to all exercises / checks / solutions? Is there a way to apply an environment for a subset of the checks / solutions?

@schloerke
Collaborator

There will be a unique environment (`envir_prep`) for each exercise, in which the setup chunks have been executed. This environment is a copy of the environment as it stood before the student runs their code.

No. learnr is currently designed to isolate each exercise. There have been many requests to change this behavior, but it will not change in the near future.

@matthew-brett
Author

Sorry for my lack of understanding. When you say 'unique environment', do you mean a new environment, the same for each exercise, that has been recreated from scratch? If not, how do I customise the variables etc. available for each exercise?

I'm not proposing to allow students' code executed in one exercise to affect the environment in another, only to be able to share code across exercises, and to configure the environment for the solution and the check. It's just an extension of what you already have with e.g. `exercise.setup="my-setup-chunk"`, applied to solutions and checks. At its simplest, in API terms, that means allowing `exercise.setup=` on solutions and checks. But it would be nice to specify something like `question.setup="my-setup-chunk"` and have it automatically apply to the exercise, the solution, and the check.
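
To make the proposal concrete, here is a hypothetical example. `question.setup` is not an existing learnr option; it is the suggested extension, and the chunk names reuse the earlier example:

```{r get-gender-data}
gender_data <- read.csv('http://bit.ly/gender-stats-data')
```

```{r names, exercise=TRUE, question.setup="get-gender-data"}
# Display the names from the gender_data data frame

```

```{r names-solution}
# Under the proposal, gender_data would already be loaded here
names(gender_data)
```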

@schloerke
Collaborator

@jcheng5

Should we alter how exercise checkers are evaluated?

Currently, we pass a copy of the setup environment (`envir_prep`, the environment as it was before the user's answer is executed) to the `exercise.checker` function to handle. Should we instead evaluate the checking function within `envir_prep`, rather than having the `exercise.checker` function manage the environment for checking?

I don't know if this would lead to scoping issues. I am still inclined to have `exercise.checker` manage the execution environment and have it call the checking code within `envir_prep`.

@reuning

reuning commented Sep 30, 2019

I just want to add that I think I am running into this problem as well, and there is an added wrinkle. When I run my assignments on my own laptop, everything runs smoothly, but once I upload them to shinyapps.io I get errors that appear to be the result of the solution checker not having access to the dataset I've loaded in the setup chunk.

I am also using the `testwhat_learnr` checker.

@schloerke
Collaborator

After talking with @jcheng5, we believe it should be the exercise checker's responsibility.

This will be possible in the next release. testwhat will require an update to evaluate the checking code within the `envir_prep` environment.

@matthew-brett
Author

@schloerke - could you say more about how we can solve this using the features in the next release?

@schloerke
Collaborator

I'll make a PR against testwhat updating `testwhat::testwhat_learnr` to handle the correct environment. There should be no change needed in your code once the PR is merged.
