
analyzer: support large data #651

Open
sbourdeauducq opened this issue Jan 2, 2017 · 6 comments

@sbourdeauducq (Member) commented Jan 2, 2017

The analyzer record format doesn't have enough data bits for SAWG events.

Fixing this requires defining a new record format that ideally:

  • is efficient (supports variable-length records, so that e.g. TTL events with 1 bit of data do not use the full 512 bits of memory reserved for the data)
  • preserves transaction atomicity (so that data flushed out of the FIFO cannot cause corruption; instead, the full record corresponding to the lost data is dropped)

Then it needs to be implemented in gateware and in artiq_coreanalyzer.
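
Purely to illustrate the variable-length requirement (this is not the actual analyzer record format), a record could pair a fixed-width header with a length-prefixed payload, so a 1-bit TTL event costs a byte or two of data rather than a fixed 512-bit field. The field names and widths below are hypothetical:

```python
import struct

# Hypothetical layout, for illustration only: 1-byte payload length,
# 4-byte channel, 8-byte timestamp, then the payload itself. These
# fields and widths are placeholders, not the real analyzer record.
_HEADER = struct.Struct("<BIQ")

def pack_record(channel, timestamp, payload):
    return _HEADER.pack(len(payload), channel, timestamp) + payload

def unpack_record(buf, offset=0):
    length, channel, timestamp = _HEADER.unpack_from(buf, offset)
    start = offset + _HEADER.size
    payload = buf[start:start + length]
    return channel, timestamp, payload, start + length  # offset of next record
```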

@whitequark (Contributor) commented Jan 2, 2017

So a self-synchronizing variable-length encoding? Something like this:

  • 0by0xxxxxx: 6 data bits
  • 0by10xxxxx: 5 data bits
  • 0by110xxxx: 4 data bits
  • 0by1110xxx: 3 data bits
  • 0by11110xx: 2 data bits
  • 0by111110x: 1 data bit
  • 0by1111110: 0 data bits

where y indicates whether this is the initial byte or a continuation byte, and a byte carrying 0 data bits indicates the end of the sequence.
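
A rough Python sketch of one possible reading of this scheme (assuming y=1 marks the initial byte of a record, continuation bytes use y=0, and a byte carrying fewer than 6 data bits closes the record, with the 0-data-bit pattern used only when the payload length is an exact multiple of 6):

```python
def encode_record(bits):
    # bits: list of 0/1 values, most significant bit first.
    out = []
    pos = 0
    initial = True
    while True:
        chunk = bits[pos:pos + 6]
        pos += len(chunk)
        n = len(chunk)
        data = 0
        for b in chunk:
            data = (data << 1) | b
        # 7-bit field: (6 - n) leading ones, a zero, then n data bits.
        prefix = ((1 << (6 - n)) - 1) << (n + 1)
        out.append((0x80 if initial else 0x00) | prefix | data)
        initial = False
        if n < 6:
            return bytes(out)  # a byte with fewer than 6 data bits ends the record
        if pos == len(bits):
            out.append(0b01111110)  # multiple of 6 bits: explicit 0-data-bit terminator
            return bytes(out)

def decode_record(data):
    # Returns (bits, number of bytes consumed).
    bits = []
    for i, byte in enumerate(data):
        if bool(byte & 0x80) != (i == 0):
            raise ValueError("bad initial/continuation flag")
        n = 6  # count leading ones of the 7-bit field to get the data width
        for bit in range(6, 0, -1):
            if byte & (1 << bit):
                n -= 1
            else:
                break
        bits.extend((byte >> bit) & 1 for bit in range(n - 1, -1, -1))
        if n < 6:
            return bits, i + 1
    raise ValueError("truncated record")

assert decode_record(encode_record([1, 0, 1, 1, 0, 1, 1]))[0] == [1, 0, 1, 1, 0, 1, 1]
```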

@jordens (Member) commented Jan 2, 2017

It would be nice if DMA (input and output), the analyzer, and DRTIO could all three use the same (or at least very similar) serialization format.

@whitequark y?

@sbourdeauducq (Member, Author) commented

There cannot be a lot in common, except for trivialities (e.g. the order of the few fields that are common) and, more importantly, for the way the variable length is communicated.

@whitequark (Contributor) commented

@jordens yes, y.

@sbourdeauducq (Member, Author) commented Jan 2, 2017

And the analyzer is the only one that requires the variable-length encoding to be self-synchronizing, so it's not clear that even that should be shared.

@whitequark (Contributor) commented

Well, it is definitely less error-prone to have one variable-length encoder than two, on both sides.
