
Hi everyone

I have a Python script that continuously collects data for me. I want to be able to display that data on a Node.js server. How should I go about this? I was thinking of maybe having the Python script send GET requests to the server, but I feel that is not the right answer. Let me know if you need more info, thanks!

Comments
  • 3
    Sounds ominous.

    3 options come to my mind without thinking much...

    Shared storage (e.g. a database)
    Pushing data
    Pulling data

    Shared storage makes the most sense when both services reside on the same host, though many people might throw stones at me for saying so - shared storage couples the two services tightly together. That isn't bad if this is just a pet project, but for something long-term and important I wouldn't do it.
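
    A rough sketch of the shared-storage option, assuming both processes sit on the same host and a made-up `readings` table: the Python collector just appends rows, and the Node service reads the same file on its own (e.g. with better-sqlite3).

    ```python
    import sqlite3
    import time

    # Assumed shared file; the Node service would open the same path read-only.
    DB_PATH = "collector.db"

    def init_db() -> None:
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS readings ("
                "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
                "  ts REAL NOT NULL,"
                "  value REAL NOT NULL"
                ")"
            )

    def store(value: float) -> None:
        # Each insert is its own transaction, so the reader never sees half-written rows.
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute(
                "INSERT INTO readings (ts, value) VALUES (?, ?)",
                (time.time(), value),
            )

    if __name__ == "__main__":
        init_db()
        store(42.0)  # replace with whatever the collector actually measures
    ```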

    Pushing and pulling are what you had in mind. The question whether to pull or to push is a bit tricky - both achieve more or less the same result, there are just a few subtle differences.

    Pulling is easier than pushing when resource limits come into play. After all, you can adapt the bandwidth dynamically on the pulling side alone, e.g. fetching fewer elements when certain criteria like memory consumption are met. Plus - if the other endpoint is down - error handling is easier.
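
    A rough sketch of the pulling option from the Python side - here I'm assuming the collector keeps recent samples in memory and exposes them via Flask on a made-up `/data` route; the Node server polls it on an interval and decides how much to fetch per request:

    ```python
    from collections import deque

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    samples = deque(maxlen=10_000)  # the collector appends JSON-serializable samples here

    @app.route("/data")
    def data():
        # The puller controls the batch size, so it can throttle itself
        # (e.g. request fewer elements when its memory consumption is high).
        limit = int(request.args.get("limit", 100))
        return jsonify(list(samples)[-limit:])

    if __name__ == "__main__":
        app.run(port=5000)
    ```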

    Pushing is tricky... for the same reasons. To adapt bandwidth, you must exchange additional information with the target. And if the target is down, a pushing service needs some failsafe information / strategy if no data should be lost.

    A pulling service has easier error handling, as it just needs to know the last element it received and ask for it again; a pushing service has to verify what the other side has or has not received.
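
    And a rough sketch of the pushing option - assuming the Node server exposes some ingest endpoint (the `/ingest` URL is made up); batches the server has not confirmed stay in a local buffer and get retried, which is the failsafe mentioned above:

    ```python
    import time

    import requests

    NODE_URL = "http://localhost:3000/ingest"  # made-up Node endpoint
    pending = []  # failsafe buffer for batches the server has not acknowledged yet

    def push(batch: dict) -> None:
        pending.append(batch)
        still_pending = []
        for b in pending:
            try:
                resp = requests.post(NODE_URL, json=b, timeout=5)
                resp.raise_for_status()  # only drop the batch once the server confirms receipt
            except requests.RequestException:
                still_pending.append(b)  # keep it and retry on the next call
        pending[:] = still_pending

    if __name__ == "__main__":
        while True:
            push({"ts": time.time(), "value": 42.0})  # replace with real samples
            time.sleep(1)
    ```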
  • 0
    @IntrusionCM, @DeepHotel, both Python and Node have general SQL adapters, so even if you put a database between them, the two are still loosely coupled.
    If it is a cloud deployment, store the data in an elastic cloud service like DynamoDB or BigQuery.
    If it is an on-premises setup, SQLite could do the trick.
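
    For the DynamoDB route, a minimal sketch with boto3 (the `readings` table name and its key layout are assumptions; BigQuery has its own client library):

    ```python
    import time
    from decimal import Decimal

    import boto3

    # Assumed table with partition key "source" (string) and sort key "ts" (number).
    table = boto3.resource("dynamodb").Table("readings")

    def store(source: str, value: float) -> None:
        # boto3's DynamoDB resource does not accept Python floats, so numbers go in as Decimal.
        table.put_item(Item={
            "source": source,
            "ts": Decimal(str(time.time())),
            "value": Decimal(str(value)),
        })

    if __name__ == "__main__":
        store("collector-1", 42.0)
    ```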
  • 1
    @JsonBoa No. They're tightly coupled.

    It might be a bit of a "philosophical" discussion - depending on argumentation and context - but to give my point of view:

    A is an inserting service. The inserting service mandates the data structure and its representation in the storage medium.

    The storage medium is an intermediary; it just represents data in the way A mandates.

    B, the node service or reader, is connected to the storage medium (the intermediary) and has no direct dependency on A, so in that sense it is loosely coupled.
    But it reads the data representation from the storage medium in the way A mandates.

    Thus - if A changes the data representation, B must change.
    And if B must change, anything that depends on B (call it C) must change in turn.

    A transitive dependency.

    Which nowadays should be seen as a form of tight coupling - or at least not loose coupling - because the intermediary / storage medium forms a dependency between A and B.
  • 0
    @IntrusionCM, I agree that this is a rather philosophical discussion depending on scope. My point is a bit different:

    If you consider the data model (i.e. the exact keys in a JSON or specific columns in a table), then nearly all systems are tightly coupled - if Netflix changes the "coming_soon" JSON key to "c_s" and does not update the app, the "coming soon" widget stops working.

    However, there is the data-model-agnostic scope, a rather "architectural" level that mostly concerns *stability* and *scalability*.
    If your data producer stops working, the Node app can still display data from the database. If the Node app stops working, you can still collect and store data. If the database is down, nothing works, but off-the-shelf DBs are heavily optimized for availability.
    Also, you can make changes at both ends as long as the data model stays the same, including adding more producers and/or consumers.
    Thus, the two ends are "loosely coupled".

    Academic discussion indeed.