Raw material
These documents are undergoing a major rearrangement, so there is plenty of text that has to be moved and reorganized. You can read about it on this page. In time, this text will be moved from here.
The syntax name: type = value; specifies that a variable named name is of the type type and is to receive the value value. This declaration syntax was proposed by Sean Barrett. Some examples:
counter: int = 0;
name: string = "Jon";
average: float = 0.5 * (x+y);
If the type is omitted then the compiler infers it based on the value:
counter := 0; // an int
name := "Jon"; // a string
average := 0.5 * (x+y); // a float
If the value is omitted then you have a declaration without an initialization.
counter: int;
name: string;
average: float;
All of this is probably backward from what you’re used to, but the learning curve is shallow and you get used to it quickly. Function declarations look like this:
// A function that accepts 3 floats as parameters and returns a float value.
sum :: (x: float, y: float, z: float) -> float {
return x + y + z;
}
print("Sum: %\n", sum(1, 2, 3));
and structure declarations like this:
Vector3 :: struct {
x: float;
y: float;
z: float;
}
Arrays can be created like this:
a: [50] int; // An array of 50 integers
b: [..] int; // A dynamic array of integers
Arrays do not automatically cast to pointers as in C. Rather, they are “wide pointers” that contain array size information. Functions can take array types and query for the size of the array.
print_int_array :: (a: [] int) {
n := a.count;
for i : 0..n-1 {
print("array[%] = %\n", i, a[i]);
}
}
Retaining the array size information can help developers avoid the pattern of passing array lengths as additional parameters and assist in automatic bounds checking (see Walter Bright – C’s Biggest Mistake).
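For contrast, here is roughly what the same helper looks like in C, where the array decays to a bare pointer and the length has to be passed separately. This is a minimal sketch for illustration; the names simply mirror the Jai example above.
#include <stdio.h>

// In C the array decays to a plain pointer at the call site, so the length
// must travel as a separate parameter, and nothing checks that it is correct.
void print_int_array(const int *a, int count) {
    for (int i = 0; i < count; i++)
        printf("array[%d] = %d\n", i, a[i]);
}

int main(void) {
    int values[5] = {1, 2, 3, 4, 5};
    print_int_array(values, 5); // the caller must supply the right length
    return 0;
}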
Suppose I want to write a function in C that converts a linear color value to sRGB. This involves the pow() function, which is on the expensive side. We can avoid calling pow() at runtime by doing the calculation ahead of time and distributing the results as part of our program. So we write a table of values and return those.
#define SRGB_TABLE_SIZE 256
float srgb_table[SRGB_TABLE_SIZE] = { /* ... values here ... */ };
float linear_to_srgb(float f)
{
// Find the index in our table for this SRGB value,
// assuming f is in the range [0, 1]
int table_index = (int)(f * SRGB_TABLE_SIZE);
return srgb_table[table_index];
}
(Note: The above is bad code, used only as an example. For better code, try stb_image_resize's sRGB functions.) So far so good, except how will we get the values for srgb_table? We can write another small program that outputs the values. For example:
#include <math.h>
#include <stdio.h>

float real_linear_to_srgb(float f)
{
if (f <= 0.0031308f)
return f * 12.92f;
else
return 1.055f * (float)pow(f, 1 / 2.4f) - 0.055f;
}
#define SRGB_TABLE_SIZE 256
int main(void) {
printf("float srgb_table[SRGB_TABLE_SIZE] = { ");
for (int i = 0; i < SRGB_TABLE_SIZE; i++)
printf("%f, ", real_linear_to_srgb((float)i/SRGB_TABLE_SIZE));
printf("}\n");
return 0;
}
We can compile this small program, which will output a table of sRGB values, and then we can copy the output into our actual program.
There is a big bucket of problems with this approach. For example, notice how SRGB_TABLE_SIZE is defined twice, once in the actual program and once in the helper program. We now have to maintain two separate source files, which can get unwieldy for large programs.
In Jai, the same task looks like this:
generate_linear_srgb :: () -> [] float {
srgb_table: [SRGB_TABLE_SIZE] float;
for srgb_table {
<< it = real_linear_to_srgb(cast(float)it_index / SRGB_TABLE_SIZE);
}
return srgb_table;
}
srgb_table: [] float = #run generate_linear_srgb(); // #run invokes the compile time execution
linear_to_srgb :: (f: float) -> float {
table_index := cast(int)(f * SRGB_TABLE_SIZE);
return srgb_table[table_index];
}
The #run directive instructs Jai to run the function generate_linear_srgb() at compile time. Jai's compile-time function execution evaluates the function and produces the table of values, which is then compiled directly into the binary as srgb_table. When the program runs, the generate_linear_srgb() function no longer exists; only the table it generated exists, and it is used by linear_to_srgb().
The compile-time function execution has very few limitations; in fact, you can run arbitrary code from your code base as part of the compiler. The first demonstration of Jai shows how to run an entire game as part of the compiler, and bake the data from the game into the program binary. (I hope #run invaders(); is shipped with the language.) The compiler compiles the functions that are executed at compile time to a special bytecode and runs them in an interpreter, and the results are funneled back into the source code. The compiler then continues as normal.
Here are some examples of things that a compile-time function could do:
- Compile-time asserts
- Run test cases
- Do code style checks
- Dynamically generate code and insert it to be compiled
- Insert build time data
- Download the OpenGL spec and build the most recent gl.h header file
- Contact a build server and retrieve/send build data
- Talk to your Mars probe on Mars and wait for the packets to come back and get a photo of what Mars looks like
All code begins its life in some kind of specific code block before moving on to be used in more general cases. Jai has special syntax that assists the programmer in moving code from specific cases out into general cases, to facilitate code reuse.
As an example, let’s say you’re writing some code like this:
draw_particles :: () {
view_left: Vector3 = get_view_left();
view_up: Vector3 = get_view_up();
for particles {
// Inside for loops the "it" object is the iterator for the current object.
particle_left := view_left * it.particle_size;
particle_up := view_up * it.particle_size;
// m is a global object that helps us build meshes to send to the graphics API
m.Position3fv(it.origin - particle_left - particle_up);
m.Position3fv(it.origin + particle_left - particle_up);
m.Position3fv(it.origin + particle_left + particle_up);
m.Position3fv(it.origin - particle_left + particle_up);
}
}
These mesh generation calls are actually a special case of general quad rendering, so they can be factored out into another function and reused in other places. Jai makes this refactoring very straightforward. The first step is to enclose the code in a new scope with a special capture syntax.
particle_left := view_left * it.particle_size;
particle_up := view_up * it.particle_size;
origin := it.origin;
[m, origin, particle_left, particle_up] {
m.Position3fv(origin - particle_left - particle_up);
m.Position3fv(origin + particle_left - particle_up);
m.Position3fv(origin + particle_left + particle_up);
m.Position3fv(origin - particle_left + particle_up);
}
(Disclaimer: This step hasn’t been implemented yet. It’s one of the planned features.) The [m, origin, particle_left, particle_up]
notation is a capture that prevents any object not in the capture from being accessed inside the inner scope of the new bracket. Notice that we had to change it.origin
to origin
and add origin
to the capture list—it
is not captured and is unavailable inside the inner scope.
Captures help in refactoring code as we’re seeing here but they can also help in other ways. For example, when programmers are moving code from being singlethreaded to multithreaded, captures could enforce that only thread-local data is accessed. Captures are an insurance policy that the code inside the capture only reads or writes the state specified in the capture.
Now we’ve identified all of the parts of our code that depend on external things, so we’ve improved our code’s hygiene and made it easy to pull this code out into its own function. Now we want to continue so that we can use the quad drawing code in other places. So we create a function out of this block capture:
particle_left := view_left * it.particle_size;
particle_up := view_up * it.particle_size;
origin := it.origin;
() [m, origin, particle_left, particle_up] {
m.Position3fv(origin - particle_left - particle_up);
m.Position3fv(origin + particle_left - particle_up);
m.Position3fv(origin + particle_left + particle_up);
m.Position3fv(origin - particle_left + particle_up);
} (); // Call the function
Notice how the only change we needed to make was to add the function syntax (). The capture remained intact, so we went from a captured block to a function with very little effort. Now, if we like, we can turn the vectors into function parameters:
(origin: Vector3, left: Vector3, up: Vector3) [m] {
m.Position3fv(origin - left - up);
m.Position3fv(origin + left - up);
m.Position3fv(origin + left + up);
m.Position3fv(origin - left + up);
}
With parameter names we’re able to change the names of the variables inside the function’s scope to match their new function. Now we can use this function to draw any type of quad, not just particles. The capture retains m
because it is a global object that doesn’t need to be passed as a parameter. And now we have an anonymous, locally scoped function that can be used in our draw code:
draw_particles :: () {
view_left: Vector3 = get_view_left();
view_up: Vector3 = get_view_up();
for particles {
particle_left := view_left * it.particle_size;
particle_up := view_up * it.particle_size;
(origin: Vector3, left: Vector3, up: Vector3) [m] {
m.Position3fv(origin - left - up);
m.Position3fv(origin + left - up);
m.Position3fv(origin + left + up);
m.Position3fv(origin - left + up);
} (it.origin, particle_left, particle_up); // Call the function with the specified parameters
}
}
Anonymous functions are useful for passing as arguments to other functions, and this syntax makes them easy to create and manipulate. The next step is to give our function a name:
draw_quad :: (origin: Vector3, left: Vector3, up: Vector3) [m] {
m.Position3fv(origin - left - up);
m.Position3fv(origin + left - up);
m.Position3fv(origin + left + up);
m.Position3fv(origin - left + up);
}
draw_quad(origin, particle_left, particle_up);
Now we could call it multiple times in the local scope, if we like. But we want to access our quad drawing function from the global scope. Moving the function out of the local scope requires zero changes to the function’s code:
draw_quad :: (origin: Vector3, left: Vector3, up: Vector3) [m] {
m.Position3fv(origin - left - up);
m.Position3fv(origin + left - up);
m.Position3fv(origin + left + up);
m.Position3fv(origin - left + up);
};
draw_particles :: () {
view_left: Vector3 = get_view_left();
view_up: Vector3 = get_view_up();
for particles {
particle_left := view_left * it.particle_size;
particle_up := view_up * it.particle_size;
draw_quad(it.origin, particle_left, particle_up);
}
}
The strength of Jai’s function syntax is that it doesn’t change whether the function is an anonymous function, a local function (i.e. lives inside the scope of another function) a member function of a class or a global function. This is in contrast to in C++, where a local function is called a lambda, and has completely different syntax than a member function, which must have a class name and ::
etc, which is slightly different syntax than a global function which has no class name or ::
. The result is that as code matures and moves from a local context to a global context, the work of refactoring can be done with minimal edits.
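For comparison, here is a rough C++ sketch of the same small function written three ways; the names are invented purely for illustration:
#include <cmath>

struct Converter {
    // Member function: declared inside the class...
    float to_float(int i);
};

// ...and defined with the class name and :: qualifier.
float Converter::to_float(int i) {
    return std::sqrt(static_cast<float>(i));
}

// Global function: no class name or :: at all.
float to_float_global(int i) {
    return std::sqrt(static_cast<float>(i));
}

void example() {
    // Local function: must be written as a lambda, with yet another syntax.
    auto to_float_local = [](int i) -> float {
        return std::sqrt(static_cast<float>(i));
    };
    to_float_local(9);
}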
Here is Jai’s code maturation process in full:
{ ... } // Anonymous code block
[capture] { ... } // Captured code block
(i: int) -> float [capture] { ... } // Anonymous function
f :: (i: int) -> float [capture] { ... } // Named local function
f :: (i: int) -> float [capture] { ... } // Named global function
All information for building a program is contained within the source code of the program. Thus there is no need for a make command or project files to build a Jai program. As a simple example:
build :: () {
build_options.executable_name = "my_program";
print("Building program '%'\n", build_options.executable_name);
build_options.optimization_level = Optimization_Level.DEBUG;
build_options.emit_line_directives = false;
update_build_options();
// Jai will automatically build any files included with the #load directive,
// but other files can also be manually added.
add_build_file("misc.jai");
add_build_file("checks.jai");
}
#run build();
When the program is built, the #run directive runs build() at compile time, and build() establishes all of the build options for this project. No external build tools are required; all build scripting is done within Jai, in the same environment as the rest of the code.
SOA AND AOS
Modern processors and memory hierarchies are much faster when spatial locality is respected, which means that grouping together data that is modified at the same time is advantageous for performance. So changing the data layout from an array of structures (AoS) style:
struct Entity {
Vector3 position;
Quaternion orientation;
// ... many other members here
};
Entity all_entities[1024]; // An array of structures
for (int k = 0; k < 1024; k++)
update_position(&all_entities[k].position);
for (int k = 0; k < 1024; k++)
update_orientation(&all_entities[k].orientation);
to a structure of arrays (SoA) style:
struct Entity {
Vector3 positions[1024];
Quaternion orientations[1024];
// ... many other members here
};
Entity all_entities; // A structure of arrays
for (int k = 0; k < 1024; k++)
update_position(&all_entities.positions[k]);
for (int k = 0; k < 1024; k++)
update_orientation(&all_entities.orientations[k]);
can improve performance a great deal because of fewer cache misses.
However, as programs get larger, it becomes much more difficult to reorganize the data. Testing whether a single, simple change has any effect on performance can take the developer a long time, because once the data structure changes, all of the code that acts on it breaks. So Jai provides mechanisms for automatically transitioning between SoA and AoS without breaking the supporting code. For example:
Vector3 :: struct {
x: float = 1;
y: float = 4;
z: float = 9;
}
v1 : [4] Vector3; // Memory will contain: 1 4 9 1 4 9 1 4 9 1 4 9
Vector3SOA :: struct SOA {
x: float = 1;
y: float = 4;
z: float = 9;
}
v2 : [4] Vector3SOA; // Memory will contain: 1 1 1 1 4 4 4 4 9 9 9 9
Getting back to our previous example, in Jai:
Entity :: struct SOA {
position : Vector3;
orientation : Quaternion;
// ... many other members here
}
all_entities : [4] Entity;
for k : 0..all_entities.count-1
update_position(&all_entities[k].position);
for k : 0..all_entities.count-1
update_orientation(&all_entities[k].orientation);
Now the only thing that needs to be changed to convert between SoA and AoS is to insert or remove the SOA keyword at the struct definition site, and Jai will work behind the scenes to make everything else work as expected.
Jai stores a table of all type information in the data segment of each compiled program. It can be examined like this:
for _type_table {
// "it" is the current element, the Type being examined; "it_index" is the integer iteration index
print("%:\n", it_index);
print(" name: %\n", it.name);
print(" type: %\n", it.type); // type is an enum, INTEGER, FLOAT, BOOL, STRUCT, etc
}
Full introspection data is available for all structs, functions, and enums. For example, printing a procedure's signature may look something like this:
print("% (", info_procedure.name);
for info_procedure.argument_types {
print_type(it);
if it_index != info_procedure.argument_types.count-1 then print(", ");
}
print(") ->");
print_type(info_procedure.return_type);
The preceding code could print something like get_name(id : uint32) -> string. An enum can be examined like this:
Hello :: enum u16 {
FIRST,
SECOND,
THIRD = 80,
FOURTH,
}
for Hello.names {
print("Name: % value: %\n", Hello.names[it_index], Hello.values[it_index]);
}
Reflection data such as this can be used to write serialization procedures, commonly used e.g. in network replication of entities and save game data. Current C/C++ methods for this involve heavy use of operator overloading and preprocessor directives.
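As a point of comparison, one common C/C++ workaround is the X-macro pattern, where each struct's fields are listed in a preprocessor macro and re-expanded for every operation that needs "reflection". A minimal sketch (the struct and field names are invented for illustration):
#include <cstdio>

// The field list must be maintained by hand in a macro.
#define ENTITY_FIELDS(X) \
    X(int,   id)         \
    X(float, health)

struct Entity {
    // Expand the list once to declare the members.
    #define DECLARE_FIELD(type, name) type name;
    ENTITY_FIELDS(DECLARE_FIELD)
    #undef DECLARE_FIELD
};

// Expand the same list again to generate a crude "serializer".
void print_entity(const Entity &e) {
    #define PRINT_FIELD(type, name) printf(#name ": %g\n", (double)e.name);
    ENTITY_FIELDS(PRINT_FIELD)
    #undef PRINT_FIELD
}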
FUNCTION POLYMORPHISM
Jai’s primary polymorphism mechanism is at the function level, and is best described with an example.
sum :: (a: $T, b: T) -> T {
return a + b;
}
f1: float = 1;
f2: float = 2;
f3 := sum(f1, f2);
i1: int = 1;
i2: int = 2;
i3 := sum(i1, i2);
x := sum(f1, i1);
print("% % %\n", f3, i3, x); // Output is "3.000000 3 2.000000"
When sum() is called, the type T is determined by the parameter marked with the $ symbol. In this case, $ precedes the parameter a, so T is determined by the first argument. The first call to sum() is therefore float + float, and the second call is int + int. In the third call, since the first argument is a float, both parameters and the return value become float: the second argument is converted from int to float, and the variable x is deduced to be a float as well.
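For comparison, a rough C++ template equivalent. Unlike Jai's $ marker, which picks T from a single designated parameter, C++ deduces T from every argument, so the mixed-type call fails to compile unless the caller casts explicitly (a sketch for illustration):
template <typename T>
T sum(T a, T b) {
    return a + b;
}

void example() {
    float f3 = sum(1.0f, 2.0f);      // T deduced as float
    int   i3 = sum(1, 2);            // T deduced as int
    // float x = sum(1.0f, 1);       // error: conflicting deductions for T
    float x = sum(1.0f, (float)1);   // the int must be cast by hand
    (void)f3; (void)i3; (void)x;
}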
THE ANY TYPE
Jai has a type called Any, to which any other type can be implicitly cast. Example:
print_any :: (a: Any) {
if a.type.type == Type_Info_Tag.FLOAT
print("a is a float\n");
else if a.type.type == Type_Info_Tag.INT
print("a is an int\n");
}
BAKING
… this section is not written yet! Sorry. (The #bake
directive emits a function with a combination of arguments baked in. For example, #bake sum(a, 1)
becomes equivalent to a += 1
.)
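In C++ terms, baking an argument is roughly partial application. A minimal sketch of the idea, using a lambda and names that mirror the sum example above (this is only an analogy, not Jai's mechanism):
#include <cstdio>

int sum(int a, int b) { return a + b; }

int main() {
    // Roughly what baking the second argument to 1 would produce:
    // a function of one argument that just adds 1 to it.
    auto sum_baked = [](int a) { return sum(a, 1); };
    printf("%d\n", sum_baked(5)); // prints 6
    return 0;
}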
Jai does not and will never feature garbage collection or any kind of automatic memory management.
STRUCT POINTER OWNERSHIP
Marking a pointer member of a struct with ! indicates that the object pointed to is owned by the struct and should be deleted when the struct is deallocated. Example:
node :: struct {
owned_a : !* node = null;
owned_b : !* node = null;
}
example : *node = new node;
example.owned_a = new node;
example.owned_b = new node;
delete example; // owned_a and owned_b are also deleted.
Here, owned_a and owned_b are marked as being owned by node, and will be automatically deleted when the node is deleted. In C++ this is accomplished with a unique_ptr<T>, but Jai considers this the wrong way to do it, because the template approach masks the true type of the object. A unique_ptr<node> is no longer a node; it's a unique_ptr masquerading as a node. It's preferable to retain the type node*, and the properties of node*-ness that go along with it, because we don't actually care about unique_ptr for its own sake.
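For reference, the C++ version being argued against looks roughly like this (a sketch; the member names mirror the Jai example):
#include <memory>

struct node {
    // The owned children are no longer plain node* pointers; their type is
    // std::unique_ptr<node>, which hides the node*-ness behind a template.
    std::unique_ptr<node> owned_a;
    std::unique_ptr<node> owned_b;
};

int main() {
    auto example = std::make_unique<node>();
    example->owned_a = std::make_unique<node>();
    example->owned_b = std::make_unique<node>();
    return 0; // when example is destroyed, owned_a and owned_b are deleted too
}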
LIBRARY ALLOCATORS
… this section is not written yet! Sorry. (Jai provides mechanisms for managing the allocations of an imported library without requiring work from the library writers.)
INITIALIZATION
Member variables of a struct are automatically initialized; by default they are set to zero.
Vector3 :: struct {
x: float;
y: float;
z: float;
}
v : Vector3;
print("% % %\n", v.x, v.y, v.z); // Always prints "0 0 0"
You can replace these with default initializations:
Vector3 :: struct {
x: float = 1;
y: float = 4;
z: float = 9;
}
v : Vector3;
print("% % %\n", v.x, v.y, v.z); // Always prints "1 4 9"
va : [100] Vector3; // An array of 100 Vector3
print("% % %\n", va[50].x, va[50].y, va[50].z); // Always prints "1 4 9"
Or you can block the default initialization:
Vector3 :: struct {
x: float = ---;
y: float = ---;
z: float = ---;
}
v : Vector3;
print("% % %\n", v.x, v.y, v.z); // Undefined behavior, could print anything
You can also block default initialization at the variable declaration site:
Vector3 :: struct {
x: float = 1;
y: float = 4;
z: float = 9;
}
v : Vector3 = ---;
print("% % %\n", v.x, v.y, v.z); // Undefined behavior, could print anything
va : [100] Vector3 = ---;
print("% % %\n", va[50].x, va[50].y, va[50].z); // Undefined behavior, could print anything
By making uninitialization explicit, rather than requiring explicit initialization, Jai hopes to reduce cognitive load while retaining the potential for optimization.
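For contrast, C++ has the opposite default: a local aggregate is left uninitialized unless the programmer explicitly opts in. A minimal sketch:
struct Vector3 { float x, y, z; };

void example() {
    Vector3 a;   // members are indeterminate; forgetting to initialize is silent
    Vector3 b{}; // explicit opt-in to zero-initialization
    (void)a;
    (void)b;
}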
INLINING
test_a :: () { /* ... */ }
test_b :: () inline { /* ... */ }
test_c :: () no_inline { /* ... */ }
test_a(); // Compiler decides whether to inline this
test_b(); // Always inlined due to "inline" above
test_c(); // Never inlined due to "no_inline" above
inline test_a(); // Always inlined
inline test_b(); // Always inlined
inline test_c(); // Always inlined
no_inline test_a(); // Never inlined
no_inline test_b(); // Never inlined
no_inline test_c(); // Never inlined
Additionally, there are directives to always or never inline particular procedures, which makes it easier to enable or disable inlining conditionally, for example depending on the platform.
test_d :: () { /* ... */ }
test_e :: () { /* ... */ }
#inline test_d // Directive to always inline test_d
#no_inline test_e // Directive to never inline test_e
Things that C/C++ should have had a long time ago:
- Multi-line block comments
- Nested block comments
- Specific data types for 8, 16, and 32 bit integers
- No implicit type conversions
- No header files
- A . operator for both struct member access and pointer dereference access (no more ->)
- A defer statement, similar to that of Go
Here’s a short list of planned features to be added to Jai.
- Automatic build management—the program specifies how to build it
- Captures
- LLVM integration
- Automatic versioning (see below)
- A better concurrency model
- Named argument passing
- A permissive license
Jai will not have:
- Smart pointers
- Garbage collection
- Automatic memory management of any kind
- Templates or template metaprogramming
- RAII
- Subtype polymorphism
- Exceptions
- References
- A virtual machine (at least, not usually—see below)
- A preprocessor (at least, not one resembling C’s—see below)
- Header files
If it sounds odd to you that Jai is a modern high-level language but does not have some of the above features, then consider that Jai is not trying to be as high-level as Java or C#. It is better described as trying to be a better C. It wants to allow programmers to get as low-level as they desire. Features like garbage collection and exceptions stand as obstacles to low-level programming.
These documents were checked with the free Grammarly browser plugin for Chrome. Please run new content through a spell checker before submitting it.
- Variables and assignments
- Language data types
- Simple user-defined data types
- Expressions and operators
- Type-casting
- Pointers
- Declarations
- Arguments / Parameters
- Return values
- Overloading / Polymorphism
- Advanced features
- Lambdas
- Arrays
- Strings
- Composition of Structs
- Metaprogramming
- Templates / Generics