Update zig.parser benchmark program
tiehuis committed Jul 9, 2018
1 parent 410b4d9 commit 82e9190
Showing 1 changed file with 6 additions and 8 deletions.
14 changes: 6 additions & 8 deletions std/zig/bench.zig
@@ -19,20 +19,18 @@ pub fn main() !void {
     }
     const end = timer.read();
     memory_used /= iterations;
-    const elapsed_s = f64(end - start) / std.os.time.ns_per_s;
-    const bytes_per_sec = f64(source.len * iterations) / elapsed_s;
+    const elapsed_s = @intToFloat(f64, end - start) / std.os.time.ns_per_s;
+    const bytes_per_sec = @intToFloat(f64, source.len * iterations) / elapsed_s;
     const mb_per_sec = bytes_per_sec / (1024 * 1024);
 
     var stdout_file = try std.io.getStdOut();
-    const stdout = *std.io.FileOutStream.init(*stdout_file).stream;
-    try stdout.print("{.3} MB/s, {} KB used \n", mb_per_sec, memory_used / 1024);
+    const stdout = &std.io.FileOutStream.init(&stdout_file).stream;
+    try stdout.print("{.3} MiB/s, {} KiB used \n", mb_per_sec, memory_used / 1024);
 }
 
 fn testOnce() usize {
     var fixed_buf_alloc = std.heap.FixedBufferAllocator.init(fixed_buffer_mem[0..]);
-    var allocator = *fixed_buf_alloc.allocator;
-    var tokenizer = Tokenizer.init(source);
-    var parser = Parser.init(*tokenizer, allocator, "(memory buffer)");
-    _ = parser.parse() catch @panic("parse failure");
+    var allocator = &fixed_buf_alloc.allocator;
+    _ = std.zig.parse(allocator, source) catch @panic("parse failure");
     return fixed_buf_alloc.end_index;
 }

1 comment on commit 82e9190

@andrewrk (Member) commented on 82e9190 Jul 9, 2018

By the way, since you're having a look at the self-hosted parsing code, here are some ideas I have for the future:

  • Now that we know we can use recursion (because of @newStackCall and the eventual ability to statically determine an upper bound on stack usage and detect call graph cycles), rewrite the self-hosted parser in a more natural recursive style. The render code is already written this way.
  • Either switch back to ArrayList(*Node) or make AST nodes tagged unions and use SegmentedList(Node), whichever is faster. With the tagged-union strategy, we could set a maximum node size with comptime asserts, and if a node wants to exceed this it would have to have a pointer field other_fields with the remaining data allocated separately (see the sketch after this list).
    • SegmentedList(*Node) - the current implementation - is kind of pointless because it is an unnecessary double-pointer situation. However, due to the non-recursive implementation of the parser, SegmentedList is handy because it lets you rely on item lifetimes being permanent, whereas ArrayList item pointers are invalidated on every append. I believe that with normal recursive descent this difference would not matter (we use ArrayList in the stage1 recursive descent parser).
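
A minimal sketch of the tagged-union idea from the second bullet. The node kinds, field names (InfixOp, FnProtoExtra, other_fields), and the 64-byte budget are hypothetical, not the actual std.zig.ast definitions, and the syntax is a rough sketch rather than tied to the 2018 std API:

```zig
const std = @import("std");

// Hypothetical payload for a small node kind.
const InfixOp = struct {
    op_token: usize,
    lhs: *Node,
    rhs: *Node,
};

// Fields that would push a large node past the size budget live behind a
// pointer and are allocated separately.
const FnProtoExtra = struct {
    name_token: ?usize,
    params_start: usize,
    params_len: usize,
};

const FnProto = struct {
    fn_token: usize,
    other_fields: *FnProtoExtra,
};

// Tagged-union node, suitable for storage by value in SegmentedList(Node).
const Node = union(enum) {
    integer_literal: usize, // token index
    identifier: usize, // token index
    infix_op: InfixOp,
    fn_proto: FnProto,
};

comptime {
    // Enforce the maximum node size at compile time; 64 bytes is an
    // arbitrary illustrative budget.
    std.debug.assert(@sizeOf(Node) <= 64);
}
```

Stored by value like this, SegmentedList(Node) keeps every node at a stable address across appends, while ArrayList(Node) element pointers are only valid until the next append, which is the lifetime difference described in the sub-bullet above.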
