Title: | Tools for Reading, Tokenizing and Parsing R Code |
---|---|
Description: | Tools for Reading, Tokenizing and Parsing R Code. |
Authors: | Kevin Ushey |
Maintainer: | Kevin Ushey <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.1.7-9000 |
Built: | 2024-11-05 04:27:11 UTC |
Source: | https://github.com/kevinushey/sourcetools |
Read the contents of a file into a string (or, in the case of read_lines, a vector of strings).
read(path) read_lines(path) read_bytes(path) read_lines_bytes(path)
path | A file path. |
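A minimal usage sketch (assuming the sourcetools package is installed; the file written here is just a throwaway example):

```r
library(sourcetools)

# Write a small temporary file to read back.
path <- tempfile(fileext = ".R")
writeLines(c("x <- 1", "y <- 2"), path)

# read() returns the whole file as a single string;
# read_lines() returns one string per line.
contents <- read(path)
lines <- read_lines(path)
```

The `_bytes` variants behave analogously but return raw bytes rather than strings.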
Discover and register native routines in a package. Functions to be registered should be prefixed with the '// [[export(<methods>)]]' attribute.
register_routines(package = ".", prefix = "C_", dynamic.symbols = FALSE)
package | The path to an R package. |
prefix | The prefix to assign to the generated R objects that map to each routine. |
dynamic.symbols | Boolean; should dynamic symbol lookup be enabled? |
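A hedged sketch of the workflow (the C routine and file name below are hypothetical; register_routines() must be run from an actual package directory, so the call is shown commented out):

```r
library(sourcetools)

# With a C routine annotated like this in src/example.c:
#
#   // [[export(.Call)]]
#   SEXP example_add(SEXP x, SEXP y) { ... }
#
# running register_routines() from the package root discovers the
# annotated routines and generates the registration code:
#
#   register_routines(package = ".", prefix = "C_", dynamic.symbols = FALSE)
```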
Tools for tokenizing R code.
tokenize_file(path) tokenize_string(string) tokenize(file = "", text = NULL)
file, path | A file path. |
text, string | R code as a character vector of length one. |
A data.frame with the following columns:
value | The token's contents, as a string. |
row | The row where the token is located. |
column | The column where the token is located. |
type | The token type, as a string. |
Line numbers are determined by the presence of the \n line feed character, under the assumption that code being tokenized will use either \n to indicate newlines (as on modern Unix systems) or \r\n (as on Windows).
tokenize_string("x <- 1 + 2")
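The returned data.frame can be inspected like any other; a short sketch (assuming sourcetools is attached — the "operator" type name used in the filter is as produced by the tokenizer):

```r
library(sourcetools)

tokens <- tokenize_string("x <- 1 + 2")

# Each token carries its contents, location, and type.
tokens[, c("value", "row", "column", "type")]

# Keep only the operator tokens (e.g. '<-' and '+').
operators <- tokens[tokens$type == "operator", ]
```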
Find syntax errors in a string of R code.
validate_syntax(string)
string | A character vector (of length one). |
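A minimal sketch (assuming sourcetools is attached): pass both well-formed and malformed code and compare the reported results.

```r
library(sourcetools)

# Well-formed code: no syntax errors to report.
validate_syntax("x <- 1 + 2")

# Malformed code: the unclosed parenthesis is a syntax error.
validate_syntax("x <- (1 +")
```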