I am not sure what is going on with the image issue. I tried to edit the above but could not. Perhaps too much info.

The photo above shows Electronic Assistant, a locally hosted AI interface I developed to integrate vision, speech, and language capabilities into a single workstation tool. It provides camera capture, image loading, browser/API access, conversational chat, engineering analysis modes, and microphone-driven voice interaction, all tied into a unified workflow. The system runs against the Qwen 2.5-14B large language model on a DGX Spark, which matters because it delivers strong coding, reasoning, and multimodal performance while remaining practical for fully local deployment: fast response, data privacy, and independence from cloud-only AI services.
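As a rough illustration of what "fully local" means in practice, here is a minimal sketch of sending one chat turn to a locally served model. It assumes the model is exposed through an OpenAI-compatible endpoint on the workstation (for example via vLLM or Ollama); the URL, port, and model name below are placeholders, not the actual Electronic Assistant code.

```python
# Minimal sketch: one chat turn against a locally served model.
# Assumes an OpenAI-compatible endpoint running on the local machine
# (e.g. vLLM or Ollama); URL, port, and model name are placeholders.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical
MODEL_NAME = "qwen2.5-14b-instruct"  # swap for a 72B variant if one is loaded

def ask(prompt: str) -> str:
    payload = {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the last camera capture in two sentences."))
```

Nothing in that round trip leaves the machine, which is where the privacy and independence from cloud services come from.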

I can swap the Qwen 2.5-14B for a 72B model, but it is a bit slower.

72B means the model has about 72 billion parameters.

In LLM terms:

Parameters = learned weights inside the neural network.

They encode language patterns, facts, reasoning heuristics, coding knowledge, etc.

More parameters generally → better nuance, reasoning depth, and contextual understanding (but higher compute cost).

So a 72B model is considered a large, high-capacity LLM — significantly more capable than small local models (7B–13B class) but still feasible on serious local hardware like DGX-class systems.
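To put the compute cost in concrete terms, a back-of-the-envelope estimate: each parameter stored at 16-bit precision takes 2 bytes, so the weights alone for a 14B model are roughly 28 GB and for a 72B model roughly 144 GB, with 8-bit or 4-bit quantization cutting that proportionally. A quick sketch of the arithmetic (weights only; the KV cache and activations add more on top):

```python
# Back-of-the-envelope weight-memory estimate for LLMs of different sizes.
# Counts only the stored weights; KV cache and activations add more.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9  # decimal GB

for params in (14, 72):
    for label, bytes_pp in (("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)):
        print(f"{params}B @ {label}: ~{weight_memory_gb(params, bytes_pp):.0f} GB")
```

That is why the 14B model feels snappy while the 72B model is slower on the same hardware.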

What all of this makes possible is building applications for music software that may not exist yet. If you think of something you would like to have, give me a heads up.

Billy


“Amazing! I’ll be working with Jaco Pastorius, Charlie Parker, Art Tatum, and Buddy Rich, and you’re telling me it’s not that great of a gig?”
“Well…” Saint Peter hesitated, “God’s got this girlfriend who thinks she can sing…”