The more I think about it, the less sense this makes to me: you get none of the advantages of a structured database (powerful SQL queries, data integrity constraints, and so on), but you bear all the cost of the DBMS sitting there basically unused, and you have to write all the tools for manipulating the data yourself.
If there were no schemaless document stores available, this might be a way of prototyping one, but there are - why build a MongoDB clone on top of Postgres when you could just use MongoDB? Perhaps some kind of hybrid might make sense as an abstract exercise, but beyond prototyping I'd have thought it would make more sense to fork Postgres and rip out the SQL than to leave all that complexity lying unused.
On a practical level, I'm not sure how you're intending foreign keys to work; it sounds like columns which happen to be foreign keys would remain as real columns, but any other columns would be mashed into the JSON document. That would mean that to retrieve the data, you'd still need to hand-craft the SQL with JOIN statements, but then also have an additional layer to manipulate the fields inside the JSON (e.g. to filter by them). Or perhaps you would hard-code the JSON manipulation into the SQL expressions themselves, in which case you might as well just have a normal schema.
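To make that concrete, here's a minimal sketch of the hybrid layout as I understand it, assuming a Postgres version with the json type and the ->> operator; the table and field names (users, orders, status) are invented for illustration:

    CREATE TABLE users (
        id   serial PRIMARY KEY,
        data json          -- every non-key column mashed in here
    );

    CREATE TABLE orders (
        id      serial PRIMARY KEY,
        user_id integer REFERENCES users(id),  -- FK stays a real column
        data    json                           -- everything else in the blob
    );

    -- The JOIN is ordinary hand-crafted SQL, but projecting and filtering
    -- need the extra JSON layer (->> extracts a field as text):
    SELECT u.data->>'name' AS customer, o.data->>'total' AS total
    FROM   orders o
    JOIN   users  u ON u.id = o.user_id
    WHERE  o.data->>'status' = 'shipped';

Note how the WHERE clause is where the "additional layer" shows up: the JSON-field logic has to be repeated in every query rather than living in the schema.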
If your primary concern with a traditional schema is the cost of changing it once the system is running, perhaps you should be more concerned about the middleware or ORM layer you need to isolate the schema from the rest of your application. With a "schemaless" structure, each row can effectively have a different schema (the structure inside the JSON blob), so the application has to cope with every past version of the structure for an item type. But if you have multiple tables with defined foreign keys, the wrapper will also need to isolate changes to those - tables being created, new relationships being defined - which is basically what you'd need for a fully relational schema.
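For instance, suppose a 'fullname' field inside the blob gets renamed to 'name' at some point (a hypothetical example): every query over a mix of old and new rows then carries the versioning logic that a schema migration would have handled once:

    -- Rows written before the rename still have data->>'fullname';
    -- newer rows have data->>'name'.  Nothing in the database records
    -- which is which, so each query has to check for both:
    SELECT COALESCE(u.data->>'name', u.data->>'fullname') AS name
    FROM   users u;

With a real column, a single ALTER TABLE users RENAME COLUMN fullname TO name would have dealt with it once, for every row and every query.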