Since perfection isn't the goal, a spectrum of possibilities might be reasonable.
It might be worth considering what accurate wood ID actually takes. Consider this from Harry Alden, who does wood ID for a living:
Overview:
https://wood-identification.com/
Wood ID basics:
https://wood-identification.com/page/
Interesting specifics:
https://wood-identification.com/wood-types/
The interview video under the "Links" heading describes some of the issues well. It's long, but the section at and following 48:00 might be helpful.
I hate to sound discouraging: developing an algorithmic solution is certainly a noble undertaking. IMO, except for some very common and well-known woods, even good photos of the side of a board or the finished surface are not going to provide a universally reliable ID, only a guess at best, whether examined by a person or an algorithm. The guess is easier if the person can hold the piece in hand, and FAR easier if the provenance can be determined. No problem, of course, if the target audience is happy with opinions and guesses. But they can get those at the local woodturning club.
It will be interesting to follow your progress over the next few months or years.
As for common names of wood, he points out it is not uncommon for a single species to have hundreds of common names, both locally and internationally.
It is always a guess, though.

Even when it is a human giving you an ID, if it is an obscure enough wood or strange enough exemplar that you actually need someone to ID it for you, it is most often going to be a guess. Everything you have stated so far explains why that is the case.
The thing with an AI-powered option is that, once trained well enough, it should be able to "guess" correctly more often. Not everyone has a wood expert handy a few houses down the road, or even at a local woodworking club. I certainly don't. I've taken many pieces of wood I can't identify to the local Woodcraft and Rockler, where there are (or have been) knowledgeable guys, and it's still almost always a guess.
I've been programming for about 35 years, and FWIW this is not really an "algorithmic" solution, not in any classical sense. Using an NN-based model, while algorithms are involved, is more like how a human identifies things than a strictly rigid, unyielding, unbending algorithm that processes things the same way every time. The benefit of an AI-based approach is that as the underlying technologies improve, which happens almost daily, with major advancements every few months, the model can be continually revised and refined to get better results without really having to change the bulk of the product. I've extracted this small core of functionality, Python code that leverages FAISS and a few other Python libraries for AI/ML work and math (namely vectorization and vector math), out from the bulk of the product. So as things change and improve, I can redesign and rebuild that core, and so long as it maintains the same general API for the rest of the app to plug into, its capabilities can improve over time.
There is also the whole dimensionality aspect. One of the reasons current LLMs are so good at producing reasonable to excellent answers the vast majority of the time (they aren't perfect, but they are DARN GOOD) is that they operate in a dimensional space far beyond the reality we know and understand. Instead of three dimensions, it's hundreds to thousands or tens of thousands. With very high dimensionality, accuracy improves. I'm currently working with 512 or 1024 dimensions, but that is pushing my computer, and support for higher dimensionality currently costs a fair amount. At some point, though, these things will become simple commodities, and I suspect I'll be able to push dimensionality much farther, which should solve some of the accuracy issues on its own.
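For concreteness on why higher dimensionality pushes the hardware: with a flat (exact) index, memory scales linearly with dimensions. The numbers below are illustrative, assuming float32 vectors and a hypothetical 100k-image reference set:

```python
# Back-of-envelope memory cost of a flat vector index.
# Assumes float32 (4 bytes per component); counts are illustrative.
n_vectors = 100_000  # hypothetical reference-image count

for dim in (512, 1024, 4096):
    bytes_needed = n_vectors * dim * 4  # vectors only, excluding metadata
    print(f"{dim:>5} dims: {bytes_needed / 2**20:8.1f} MiB")
```

Search time for an exact index also scales linearly with both the vector count and the dimensionality, which is the other half of the cost.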
As for naming and all that: I still think a community-based approach that allows users of the system to contribute details can help suss out nuances and improve descriptions over time. Again, this is never going to be perfect; I have always known that. It is not perfect now when you ask humans. I've asked for wood IDs many times across a few different woodworking forums. It's usually a guess, and most of the time I end up going with the "consensus" of many replies. However, there are aspects of an AI model that I think can help here and improve the guesses in the long run. In fact, if/when I am able to figure out how to train an effective model on images, I think adding a model trained on text-based knowledge describing wood characteristics could also be used to hone results, as well as produce useful descriptions for users who query for wood IDs. If community-sourced knowledge could be factored into that additional model over time, that too could improve the IDs of woods.
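One simple way a second, text-based model could hone the image model's results is score blending: re-rank the image model's candidates using the text model's scores with a tunable weight. The `rerank` helper, species names, and scores below are all made up for illustration:

```python
def rerank(image_scores, text_scores, alpha=0.7):
    """Blend per-species scores from the image model and a text-knowledge
    model, weighting the image model by alpha. Purely illustrative."""
    combined = {
        species: alpha * image_scores[species]
                 + (1 - alpha) * text_scores.get(species, 0.0)
        for species in image_scores
    }
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical scores: the image model slightly favors black walnut, but the
# text model (e.g. matching a user's description of weight/smell) disagrees.
image_scores = {"black walnut": 0.82, "butternut": 0.79, "sapele": 0.40}
text_scores = {"butternut": 0.9, "black walnut": 0.3}

print(rerank(image_scores, text_scores))
# → ['butternut', 'black walnut', 'sapele']
```

Community-sourced corrections could then feed back in as adjustments to the text-model scores, without retraining the visual model at all.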
Anyway, this is just an experimental project at this point. The primary need ATM is source material to train the visual model on. The core functionality is really not that much code: I think I originally wrote about 250 lines of Python, with a few third-party libraries like FAISS, to get the initial core AI service working. It may have grown to 300-400 lines now. In any case, it's not much code; most of it relies on existing OSS libraries for LLMs, ML, AI, and math. As much as I dislike Python as a language, the sheer volume of available libraries for this kind of work is phenomenal, and it greatly reduced the effort required to actually write the code.

The real work has been training the model, which demands data: gathering it and cataloging it along with the necessary associated metadata. Gathering images, and doing the work to associate them with SOME kind of wood description, has been by far the VAST majority of the effort (and what may ultimately kill the effort in the long run; I guess it depends on just how much work it becomes to maintain the source material for training). This is, of course, limited by my own knowledge and my own ability to source material for training. There are good resources out there that I rely on (and now I'll be adding wood-identification.com as a potential source for metadata and the knowledge to determine that metadata), but in the long run I think it will ultimately take a broader-scale community effort to really break down the metadata and associations with various wood images and produce a more optimal model. It'll be some time before I can build a starter model good enough to release a product or form a community around, though. So it's just ongoing work right now: find images, filter them down to the best exemplars, find and associate the closest relevant metadata, repeat for many woods, and periodically rebuild the model.
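On the cataloging side, the per-image record doesn't need to be elaborate; something like this sketch is roughly the shape of metadata involved (all field names and values here are hypothetical, not my actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class WoodSample:
    """One training image plus whatever metadata can be pinned down."""
    image_path: str
    common_name: str            # best-guess name; often one of many
    botanical_name: str = ""    # filled in only when the ID is firmer
    source: str = ""            # where the image/metadata came from
    confidence: str = "guess"   # e.g. "guess" | "consensus" | "verified"
    notes: list[str] = field(default_factory=list)

# The ongoing loop: find images, keep the best exemplars, attach the
# closest relevant metadata, repeat, then periodically rebuild the model.
catalog = [
    WoodSample("img/0001.jpg", "black walnut",
               botanical_name="Juglans nigra",
               source="own photo", confidence="consensus"),
]
print(len(catalog), catalog[0].common_name)
```

A structured record like this also leaves room for later community contributions, since a "guess" can be upgraded to "consensus" without touching the image itself.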