Refactor how fabric data is shared between parts #1475

Closed
litghost opened this issue Oct 29, 2020 · 11 comments · Fixed by #1539


@litghost
Contributor

litghost commented Oct 29, 2020

Before adding data for all parts (e.g. pinouts), a refactoring of how fabric data is shared is required. For terminology:

  • Fabric
    • The interior tile and routing layout within an FPGA's silicon
  • Device
    • The name of the fabric used by Vivado
  • Package
    • A standardized pin layout that can be used with one or more devices.
  • Part
    • A device wire-bonded to a package
  • Family
    • A collection of devices and parts that share some common traits.

Currently prjxray-db only has families (e.g. artix7, kintex7, zynq7) and parts (e.g. xc7a35tcpg236-1). Because prjxray-db is missing the explicit idea of a device and fabric, fabric data is currently duplicated between parts that have the same device. To be very specific:

  • The fabric for a part is defined in prjxray-db/<family>/<part> with the following files:
    • node_wires.json
    • tilegrid.json
    • tileconn.json

Because a single fabric can be shared by multiple parts, these files are copied between those parts. This is wasteful. The solution is to:

  • Add a small file (a hypothetical sketch follows the list) describing:
    • The list of parts in each family
    • The device for each part
    • The speed grades for each part
    • The fabric used in each device

Then the fabric data (tilegrid.json, etc.) should be moved to a fabric folder rather than a part folder. The prjxray library should be updated to transparently handle the fabric/device split from the part.

This change is required before adding part data for all parts.

@litghost litghost changed the title Refactor how shared fabric data is shared between parts Refactor how fabric data is shared between parts Oct 29, 2020
@hansfbaier
Collaborator

In order to help with this: where are the data sources these files are derived from? Datasheets, scripts, parts of Vivado?

@litghost
Contributor Author

When Vivado opens a design, it has the part, the device, and the package. Here is a small TCL example:

link_design -part xc7a35tcpg236-1
# The design's PART property names the part; fetch the part object from it.
set design [current_design]
set part [get_parts [get_property PART $design]]
# The part object carries both its package and its device.
set package [get_property PACKAGE $part]
set device [get_property DEVICE $part]

From the device you can find all of the parts (and thus all of the packages) for that device:

get_parts -filter "DEVICE == xc7a35t"

You can also just get all parts that Vivado supports, but you'd want to filter by family:

get_parts -filter "FAMILY == artix7"
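
For the per-part speed grades mentioned in the issue, the part object itself should carry them; report_property dumps every property Vivado knows about an object, which is an easy way to find the exact property names. A quick check, hedged in that the SPEED name below is from memory and may differ between Vivado versions:

report_property [get_parts xc7a35tcpg236-1]
get_property SPEED [get_parts xc7a35tcpg236-1]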

@hansfbaier
Collaborator

hansfbaier commented Dec 18, 2020

@litghost Where do we get the fabric from? I could not find a tcl property which relates to that.

@dnltz
Contributor

dnltz commented Dec 27, 2020

@hansfbaier - Did you start working on this issue?

@hansfbaier
Collaborator

@dnltz No, after playing with the tcl commands for a while I figured out that I still lack quite a bit of knowledge. I will now try to start smaller and add two FPGAs to enhance my understanding of how everything works.

@hansfbaier
Collaborator

hansfbaier commented Dec 29, 2020

@litghost , @dnltz Yesterday I was trying to add a new part (from 2am to 6pm), and now I understand why this ticket exists. 32 GB of RAM does not seem to be enough for create_node_tree in 071-dump_all, and it froze my system (even without swap space). I will try to do a proof of concept for 071-dump_all that uses sqlite3 instead of json. Using that, I could fully deduplicate (i.e. normalize) the data, so that everything is stored exactly once. sqlite seems to do well on large data sets: https://stackoverflow.com/questions/1033309/sqlite-for-large-data-sets

@hansfbaier
Collaborator

@dnltz When do you have time to work on it?

@dnltz
Contributor

dnltz commented Jan 4, 2021

@hansfbaier - I started setting up the project yesterday. Not sure if you got further. I neither want to do the same work twice nor "steal" the PR from you. Do you want to finish, or should I try?

@hansfbaier
Collaborator

hansfbaier commented Jan 4, 2021

@dnltz Please go ahead. I still have a very steep learning curve ahead of me, and currently it looks like I first want to learn the ins and outs of litex and get more familiar with the details of FPGAs. I did look a bit into what it would take to switch to a sqlite3-based data store; I think it would be a great fit, because the data is very relational (mappings are relations), if you think that is a good idea.

Actually, it is very easy to use sqlite3 from Vivado:

$ sudo apt install libsqlite3-tcl
$ ln -s /usr/lib/tcltk/sqlite3  ~/.Xilinx/Vivado/2017.2/XilinxTclStore/tclapp/sqlite3

As soon as the symlink into the local tcl store is in place, it is possible to use sqlite3 from the Vivado tcl shell. I also learned that it is possible for multiple parallel processes to write into the same sqlite3 database file, so it looks like it would be feasible to run the fuzzers in parallel and have them write into the same database.
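
A minimal sketch of what a dump writing into sqlite3 from the Vivado tcl shell could look like, assuming the symlink above makes "package require sqlite3" work inside Vivado; the database file, table layout, and the TYPE property are illustrative assumptions, not a settled schema:

package require sqlite3
# Open (or create) fabric.db; "db" becomes a Tcl command bound to it.
sqlite3 db fabric.db
db eval {
    CREATE TABLE IF NOT EXISTS tile_type (pkey INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE IF NOT EXISTS tile (name TEXT PRIMARY KEY, type_pkey INT REFERENCES tile_type(pkey));
}
foreach tile [get_tiles] {
    set name "$tile"
    set type [get_property TYPE $tile]
    # Store each tile type exactly once; tiles reference it by key (normalized).
    db eval {INSERT OR IGNORE INTO tile_type (name) VALUES ($type)}
    db eval {INSERT OR REPLACE INTO tile (name, type_pkey)
             VALUES ($name, (SELECT pkey FROM tile_type WHERE name = $type))}
}
db close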

@litghost
Contributor Author

litghost commented Jan 5, 2021

@litghost Where do we get the fabric from? I could not find a tcl property which relates to that.

So as a first pass you can treat device as equivalent to fabric, as that is generally true. Some exceptions exist; specifically, we know that the xc7a50 and xc7a35 fabrics appear to be identical, and we alias the a50 fabric data onto the a35 devices.

One thing we could do is simply output all of the artix7 devices and then compare the outputs to verify that several devices are actually the same fabric. Does that make sense?
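
A minimal sketch of the enumeration step, assuming a Vivado tcl shell; the per-device dump-and-diff itself is left out:

set devices [dict create]
foreach part [get_parts -filter "FAMILY == artix7"] {
    # Collapse parts onto their DEVICE value; the dict keys end up unique.
    dict set devices [get_property DEVICE $part] 1
}
# Dump fabric data once per device; identical dumps imply a shared fabric.
puts [dict keys $devices]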

@hansfbaier
Collaborator

@dnltz Great work! I could not have done that with my current knowledge. Good that I got out of the way :)
