Wrap your SQL into HTTP REST endpoints automagically ^_^

  • Howdy guys, I'm looking for BETA testers for the following thing.

    https://www.youtube.com/watch?v=hufLFulYjVI

    It basically allows you to automatically generate HTTP REST endpoints wrapping your database tables and/or custom SQL statements. If you want to test it, I'd be eternally grateful, and if you (God forbid) find a (severe) bug in it, I'll throw you a free license (it's a proprietary, commercial product I'm trying to make a living from).

    To download it and play around with it locally, you can visit the following website.

    https://polterguy.github.io/

    Thank you for your time ^_^

  • It's an interesting approach. There are a few questions that jump out, though. There's no mention of how DML statements like INSERT/UPDATE/DELETE are handled. Could it execute stored procedures? How would you bind query output to CLR objects? What is the endpoint deployment model? Non-technically, who is the target audience for this? What existing products do you see as competitors?

    There was a forum post a while back about a somewhat similar type of application.

    https://www.sqlservercentral.com/forums/topic/are-there-application-developers-in-here-where-are-you-doing-your-sql

     

    No one shall be able to drive us from the paradise that Cantor has created for us

  • Hi Steve, and thank you for taking the time and coming up with interesting questions. I'll try to answer as intelligently as I can 🙂

    The queries end up wrapping a DataReader in .NET 5 which, although technically created to handle SELECT statements better, also works perfectly well for INSERT/UPDATE/DELETE statements. You can change these bindings manually in code afterwards if you wish, to use Execute and Scalar invocations on the SqlCommand/MySqlCommand.
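    To illustrate the difference between these invocation styles - sketched here in Python with sqlite3 standing in for SqlCommand/MySqlCommand, since the actual bindings are C# and not shown here - the three shapes look roughly like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# "Execute" style: DML statements return an affected-row count, no record set.
affected = conn.execute("INSERT INTO users (name) VALUES (?)", ("Alice",)).rowcount

# "Scalar" style: a single value from the first column of the first row.
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# "DataReader" style: iterate the full record set row by row.
rows = [dict(zip(("id", "name"), row))
        for row in conn.execute("SELECT id, name FROM users")]
```

    The point is just that a reader-style API still happily executes DML - it simply yields an empty record set for those statements.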

    As to stored procedures, in my day job I'm using it to execute an SP that returns a record set, which I then iterate over to invoke HTTP POST requests towards MailChimp (importing emails), executed in a scheduled task once per day - so this should be painless. As to binding queries to CLR objects, it's an abstraction on top of the CLR (think YAML bindings towards C#), so it returns "YAML kind of data" (internally, within C#, it's a name/value/children class called "Node"), which makes the question not really relevant.

    However, it does bind towards SqlConnection/MySqlConnection, SqlCommand/MySqlCommand, and DataReaders, dynamically retrieving all rows and columns and creating a "Node" instance holding the record set, which is later converted to JSON when returned to the client over the HTTP connection.
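    To make the "Node" idea concrete, here's an illustrative sketch (in Python, and purely a guess at the shape - the real class is C#) of a name/value/children node that a record set is loaded into and then serialized to JSON:

```python
import json

class Node:
    """Hypothetical name/value/children node, loosely mirroring the description."""
    def __init__(self, name="", value=None, children=None):
        self.name, self.value, self.children = name, value, children or []

def rows_to_node(rows):
    # Each row becomes a child node whose children are the columns.
    root = Node("result")
    for row in rows:
        root.children.append(Node("row", children=[Node(k, v) for k, v in row.items()]))
    return root

def node_to_json(root):
    # Serialize the record set as a JSON array of objects.
    return json.dumps([{col.name: col.value for col in row.children}
                       for row in root.children])
```

    The actual implementation differs, of course, but this is the general idea: a generic tree in the middle, JSON only at the edge.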

    As to the deployment model of the app itself, I could arguably do a much better job, and I will probably create Azure Resource Manager templates, Docker containers, and such later to ease this part. The endpoints themselves, however, are created dynamically on the fly, by writing "Hyperlambda" files (the YAML parts) into a folder. These files are the declarations of the endpoints, and they are loaded, parsed, and "executed" dynamically by a dynamic URL resolver. That's why it works on the fly, without any compilation occurring, etc ...

    Hyperlambda is (kind of) a DSL engine on top of .NET 5, to oversimplify the explanation. Hyperlambda files are created "on the fly" and persisted on the file system of the server, then resolved dynamically using a generic/wildcard HTTP resolver I've built myself.
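    The resolver concept - mapping a request URL to a file on disk that is then parsed and executed - can be sketched like this (an illustrative Python analogy, not the actual C# resolver; the folder layout and the ".get.hl" naming convention are assumptions for the example):

```python
from pathlib import Path

def resolve(url, root="files/endpoints"):
    """Hypothetical wildcard resolver: map '/magic/foo/bar' to a file under root.

    In the real product the resolved file would be a Hyperlambda declaration
    that is loaded, parsed, and executed; here we only compute the path.
    """
    assert url.startswith("/magic/")
    relative = url[len("/magic/"):]
    return Path(root) / f"{relative}.get.hl"
```

    Because the endpoint is just a file, dropping a new file into the folder creates a new endpoint instantly - no compilation, no redeploy.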

    As to the target audience, I feel a bit stupid admitting I've got no idea. I personally think it's the best thing since sliced bread, and I use it for everything I do in my day job, but of course I created it, so I'm not really neutral. Apparently few other developers have (so far) seen the need, which I find amazing, since it solves roughly 50% of my job by simply clicking a button. Some have looked at it and said, "So, it's CRUD, who cares?" - CRUD is 50% of my job, is my answer. However, I suspect "data people" might be more interested, which is why I wanted to show it here 🙂

    As to competitors, there are some products out there, PostgREST being one (PostgreSQL only, though). Some other guys I saw on Twitter (can't remember their names) have something similar, but they're hosting the stuff themselves, not allowing you to install it on your own servers, and it's much less mature (GUIDs for security tokens, etc.). Then of course there's the whole "no-code" thing, but 99% of those tools create mostly frontend stuff with static backends - this thing generates a dynamic backend accommodating your exact data needs, based upon meta information retrieved from your database.

    QueryFirst seems interesting. I personally believe SQL has been given way too little love after everybody fell into the "ORM psychosis" some 15-20 years ago. Controlling your SQL gives you a lot of things ORM tools simply can't give you, ever - Dapper itself is one manifestation of this. I sometimes feel like starting an "anti-ORM manifesto" 😉

    However, QueryFirst needs to be compiled, I think. One of the most important things for me was to avoid that step and create something more dynamic in nature, not requiring compilation or re-deployment after applying changes - arguably getting the benefits of statically compiled languages mixed with the dynamic capabilities of things such as Python and PHP, although in a ".NET/CLR context". This allows for creating "throw-away endpoints" for import/export types of jobs, and opens up a whole new axiom the way I see it. Either way, QueryFirst is definitely a step in the right direction 🙂

  • Golly gosh, I wrote a wall of text - Sorry man :/

  • polterguy wrote:

    The queries end up wrapping a DataReader in .NET 5 which, although technically created to handle SELECT statements better, also works perfectly well for INSERT/UPDATE/DELETE statements. You can change these bindings manually in code afterwards if you wish, to use Execute and Scalar invocations on the SqlCommand/MySqlCommand.

    Allocating a DataReader for every request is a decision many developers will not agree with. If a data request fails due to an exception thrown by the DB engine, is the SQL error captured? When a parameter is added to a POST method, the endpoint then expects a JSON message body. How are URL parameters added to POST methods which also require a message body?

    polterguy wrote:

    As to stored procedures, in my day job I'm using it to execute an SP that returns a record set, which I then iterate over to invoke HTTP POST requests towards MailChimp (importing emails), executed in a scheduled task once per day - so this should be painless. As to binding queries to CLR objects, it's an abstraction on top of the CLR (think YAML bindings towards C#), so it returns "YAML kind of data" (internally, within C#, it's a name/value/children class called "Node"), which makes the question not really relevant.

    However, it does bind towards SqlConnection/MySqlConnection, SqlCommand/MySqlCommand, and DataReaders, dynamically retrieving all rows and columns and creating a "Node" instance holding the record set, which is later converted to JSON when returned to the client over the HTTP connection.

    Type conversions are generally to be avoided, no? Did you know Entity Framework converts query results to IQueryable?

     


  • Allocating a DataReader for every request is a decision many developers will not agree with.

    Hmm, I checked my own code (this part was created a long time ago), and it is actually using ExecuteNonQueryAsync - sorry for misleading you here. Only "data.read" encapsulates a DataReader. For the record, I've measured it against Dapper and ADO.NET, and it only performs marginally worse than ADO.NET - on par with Dapper, actually - implying it is roughly 5 times less CPU intensive than Entity Framework on basic CRUD.

    If a data request fails due to an exception thrown by the DB engine, is the SQL error captured?

    Everything is logged, and you can catch the exception in Hyperlambda and transform it into a transparent HTTP response if you wish - although, by default (for security reasons), no exceptions are returned to the client.
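    The pattern described - log the real error, return a generic message unless the endpoint explicitly opts in to transparency - can be sketched as follows (illustrative Python, not the actual Hyperlambda syntax):

```python
import logging

logger = logging.getLogger("endpoint")

def handle_request(execute_sql, expose_details=False):
    """Run a DB operation; log the real error, shield the client by default."""
    try:
        return 200, execute_sql()
    except Exception as exc:
        logger.exception("database error")            # full details go to the log
        if expose_details:                            # opt-in "transparent" response
            return 500, {"message": str(exc)}
        return 500, {"message": "An error occurred"}  # default: nothing leaks
```

    Shielding by default matters because raw SQL errors can reveal schema details to an attacker.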

    When a parameter is added to a POST method, the endpoint then expects a JSON message body. How are URL parameters added to POST methods which also require a message body?

    You can combine them as you see fit, but internally the URL resolver doesn't discriminate between them, and you can't really do so in your own Hyperlambda code either. If you need to discriminate between JSON payload arguments and query arguments, you'll have to create your own API controller. However, just like attributes in XML, being able to discriminate between query arguments and JSON arguments is arguably (pun intended!) not a very good idea ... 😉
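    Merging query-string arguments and JSON body arguments into one flat collection, so downstream code can't tell which source a value came from, might look like this (an illustrative Python sketch, not the actual resolver code; the collision rule is an arbitrary choice for the example):

```python
import json
from urllib.parse import parse_qsl

def collect_args(query_string, body):
    """Combine URL and JSON-body arguments into one flat dict.

    Body values win on name collisions - an arbitrary choice for this sketch.
    """
    args = dict(parse_qsl(query_string))
    if body:
        args.update(json.loads(body))
    return args
```

    Once merged like this, the endpoint's logic sees a single argument collection, which is exactly why the two sources can't be discriminated later.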

  • You can see the URL resolver here, for the record. It's not perfect - among other things, I'm struggling a bit with how to dynamically handle all types of content - but where it doesn't suffice, there's always the ability to create your own controller in C#. Everything starting with "magic", though, is flushed through the Magic controller's router, resulting in a Hyperlambda file being evaluated ...
