Respecting that I do not know that much about the work environment of FPGA
developers:

On 2025-02-24 12:28 p.m., Michael S wrote:
> It is likely not worth it to invest effort into things that one
> effectively can't distribute either.
>
> On Mon, 24 Feb 2025 11:52:38 -0500
> EricP <ThatWouldBeTelling@thevillage.com> wrote:
>> Michael S wrote:
>>> On Sun, 23 Feb 2025 11:13:53 -0500
>>> EricP <ThatWouldBeTelling@thevillage.com> wrote:
>>>> It looks to me that Vivado intends that after you get your basic
>>>> design working, this module optimization is *exactly* what one is
>>>> supposed to do.
>>>>
>>>> In this case the prototype design establishes that you need
>>>> multiple 64-bit adders, and the generic ones synthesis spits out
>>>> are slow. So you isolate that module off, use Verilog to drive the
>>>> basic LE selections, then iterate doing relative LE placement
>>>> specifiers, route the module, and when you get the fastest 64-bit
>>>> adder you can, lock down the netlist and save the module design.
>>>>
>>>> Now you have a plug-in 64-bit adder module that runs at (I don't
>>>> know the speed difference between Virtex and your Spartan-7, so
>>>> wild guess) oh, say, 4 ns, to use in multiple places: fetch,
>>>> decode, ALU, AGU.
>>>>
>>>> Then plug that into your ALU, add in the SUB, AND, OR, XOR
>>>> functions, isolate that module, optimize placement, route, lock
>>>> down the netlist, and now you have a 5 ns plug-in ALU module.
>>>>
>>>> Doing this you build up your own IP library of optimized hardware
>>>> modules.
>>>>
>>>> As more and more modules are optimized, system synthesis gets
>>>> faster because much of the fine-grain work and routing is already
>>>> done.
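The reason the generic synthesized 64-bit adders are slow is the long ripple-carry chain, and the hand-placed module described above would typically use a faster carry structure. As a purely behavioral illustration (plain Python, not HDL; the 8-bit block width and all names here are my own assumptions, not from the thread), a carry-select decomposition computes both possible sums per block so the incoming carry only has to select one:

```python
# Behavioral sketch (not HDL) of a carry-select 64-bit adder: the kind
# of structural decomposition one might hand-place as a reusable module.
# The 8-bit block width is an arbitrary choice for illustration.

BLOCK = 8
MASK = (1 << BLOCK) - 1

def carry_select_add64(a: int, b: int) -> int:
    """Add two 64-bit values using 8-bit carry-select blocks."""
    result = 0
    carry = 0
    for i in range(64 // BLOCK):
        shift = i * BLOCK
        ab = (a >> shift) & MASK
        bb = (b >> shift) & MASK
        # In hardware each block computes both outcomes in parallel;
        # the true carry-in then merely selects one of them, so the
        # critical path is a chain of muxes, not full-width carries.
        s0 = ab + bb        # assuming carry-in = 0
        s1 = ab + bb + 1    # assuming carry-in = 1
        s = s1 if carry else s0
        result |= (s & MASK) << shift
        carry = s >> BLOCK
    return result & ((1 << 64) - 1)
```

In hardware the selection chain between blocks is what gets hand-placed along the fast carry resources; the Python loop serializes what the fabric does concurrently.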
>>> It sounds like your 1st-hand FPGA design experience is VERY
>>> outdated.
>> Never have, likely never will.
>> Nothing against them - looks easier than wire-wrapping TTL and 4000
>> series CMOS. Though people do seem to spend an awful lot of time
>> working around certain deficiencies, like the lack of more than one
>> write port on register files, and the lack of CAMs. One would think
>> market forces would induce at least one supplier to add these and
>> take the FPGA market by storm.
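For what it's worth, the standard workaround for the missing second write port is to replicate the RAM, one bank per write port, plus a small "live value table" (LVT) recording which bank last wrote each register. The technique is well known; this behavioral sketch (class and names are my own illustration) shows the idea:

```python
# Behavioral sketch of a 2-write-port register file built from two
# 1-write-port banks plus a live value table (LVT). This is a common
# FPGA workaround for single-write-port block RAM; the class and names
# are illustrative, not from the thread.

class LVTRegFile:
    def __init__(self, nregs: int = 32):
        self.banks = [[0] * nregs, [0] * nregs]  # one RAM per write port
        self.lvt = [0] * nregs  # which bank holds the live value

    def write(self, port: int, reg: int, value: int) -> None:
        # Each write port owns its own bank, so the two ports never
        # contend for the same RAM; the LVT remembers who wrote last.
        self.banks[port][reg] = value
        self.lvt[reg] = port

    def read(self, reg: int) -> int:
        # A read consults the LVT to select the bank with the live copy.
        return self.banks[self.lvt[reg]][reg]
```

The cost is the replicated storage and the LVT itself, which is small (one bit per register for two write ports) but must itself be multi-ported, typically in flip-flops.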
> Your view is probably skewed by talking to soft-core hobbyists.
> Please realize that most professionals do not care about
> high-performance soft cores. Soft cores are for control-plane
> functions rather than for the data plane. The important features are
> ease of use, reliability (especially of the software tools), and
> small size. Performance is rated low. Performance per clock is rated
> even lower. So professionals do not develop soft cores by themselves,
> and the off-the-shelf cores that they use are not superscalar - quite
> often not even fully pipelined.
> That means, no, small SRAM banks with two independent write ports are
> not a feature that FPGA pros would be excited about.
>> Also FPGAs do seem prone to monopolistic locked-in pricing
>> (though not really different from any relational database vendor).
> Cheap Chinese clones of X&A FPGAs from the late 2000s and very early
> 2010s certainly exist. I didn't encounter Chinese clones of slightly
> newer devices, like the Xilinx 7-series, but I didn't look hard for
> them, so I wouldn't be surprised if they exist too.
> Right now, and for almost a full decade back, neither X nor A cares
> about the low end. They just continue to ship old chips, mostly
> charging the old price or raising it a little.
>> At least with TTL one could do an RFQ to 5 or 10 different
>> suppliers.
>>
>> I'm just trying to figure out what these other folks are doing to
>> get bleeding-edge performance from essentially the same tools and
>> similar chips.
>>
>> I assume you are referring to the GUI IDE interface for things like
>> floor planning, where you click on LE cells and set some attributes.
>> I also think I saw a reference to locking down parts of the netlist.
>> But there are a lot of documents to go through.
> No, I mean floorplanning, as well as most other manual physical-level
> optimizations, are not used at all in 99% of FPGA designs started
> after the year 2005.
I have thought of FPGAs as more of a prototyping tool, or something to be
used in one-off, proof-of-concept type designs. In those cases one
probably does not care too much about manual operations; as was said, one
would be more interested in developer productivity, which comes from
reliable tools and being able to deal with things at a high level.
The vendors have a number of pre-made components that can be plugged into
a design, making it possible to sketch out a design very quickly, with a
couple of caveats. One being that one might be stuck with a particular
vendor.

Yeah.
CAMs can easily be implemented in FPGAs, although they may have
multi-cycle latency; one has only to research CAM implementation in
FPGAs. Register files with multiple ports are easily implemented with
replication. It may be nice to see a CAM component in a vendor library.
Register files sometimes have bypassing requirements that might make it
challenging to develop a generic component.

CAMs exist in higher-end FPGAs.
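To make the CAM point concrete, here is a behavioral model of the interface such a component would present: present a key, get back a match vector. Real FPGA implementations map the compare onto block RAMs or LUTRAMs, which is where the multi-cycle update latency comes from; this sketch (class and names are mine, not any vendor's API) models only the behavior:

```python
# Behavioral sketch of a small binary CAM: searching returns a one-hot
# (or multi-hot) vector of the entries whose stored key matches. An
# FPGA implementation does all the compares in parallel; updates may
# take multiple cycles, which this model does not show.

class SimpleCAM:
    def __init__(self, entries: int = 8):
        self.keys = [None] * entries  # None marks an invalid entry

    def update(self, index: int, key: int) -> None:
        self.keys[index] = key

    def search(self, key: int) -> int:
        # Every valid entry compares against the key; the result is a
        # bit vector with one bit per entry.
        match = 0
        for i, k in enumerate(self.keys):
            if k == key:
                match |= 1 << i
        return match
```

A TLB or store queue would use exactly this search-by-content operation, which is why the lack of a hard CAM primitive gets noticed in soft-core work.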
If one is concerned about the performance of a one-off, simply buy a chip
with double the performance. That would probably be a lot less expensive
than implementing everything manually.

I was more going for a local optimum, which kinda led me to where I
was...
In the past I have managed to purchase FPGA boards with a higher-speed grade or higher capacity part for additional $$$, as long as the footprint was the same. It does not hurt to ask, as it indicates demand.
If it is going to be a high-volume design, it may be implemented again as custom logic.
From a hobbyist perspective, being able to go down to micro-detail is great. It is possible to get significantly better performance that way, when it is not possible to obtain a better chip.