Distillation can extract the knowledge from an existing model into a newly trained one. That doesn't solve the cost problem, but costs are steadily coming down.
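As a rough sketch of what "extracting the knowledge" means mechanically: classic distillation trains the student to match the teacher's temperature-softened output distribution, not just its top label. A minimal plain-Python version (the logits here are made up for illustration):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened outputs.

    Minimizing this pushes the student to mimic the teacher's full
    probability distribution, which carries more information than
    the hard label alone.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    # T^2 factor keeps gradient scale comparable across temperatures
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0
    )

# A student that already matches the teacher incurs zero loss;
# a disagreeing one incurs a positive loss.
matched = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
mismatched = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

Note that even with the student trained this way, you still only have access to the teacher's behavior, not the data it was trained on.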
That's still a crude repurposing of an inscrutable artifact. Open source requires you to share the source data from which that artifact (the model parameters) was created.