Comments
-
Yep, I've been forced to do something similar. SharePoint is generally a trashfire that even the odata interface can't compensate for.
If you need a db, just use a db 😆 -
joeyhops: @SortOfTested 🙄 You're telling me! SharePoint is good for what SharePoint has always been good for: documents. Other than that, if you're trying to do something in SharePoint that doesn't involve documents, you probably want to use something else 😂
-
@joeyhops
Yep. But hey, look at the bright side, no one at your company has suggested "SharePoint-as-a-message bus" with fragile AF workflow httprequest yet. #winning -
*disgusted, mortified face*
Well. Greek mythology had echidna, the mother of all monsters.
Guess you did a pretty good job of implementing her modern equivalent in the 21st century. -
joeyhops: @IntrusionCM Oh my friend, you don't know the half of it o.O We have to warn new devs looking at it that this project is... well, "A Beast" is putting it lightly
-
@joeyhops ;)
I've seen my fair share of insanity.
I guess reinventing a DB isn't joyful.
That's what you did.
Afaik SharePoint has no transaction support, so in a nutshell you must have invented some kind of (semi-)transactional system that wraps SharePoint. Your caching is a part of that.
And yes... Handling process state on a shared resource without transactions or ACID isn't fun.
Lots of brain damage. -
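The "(semi-)transactional wrapper" idea above can be sketched roughly like this — a hedged TypeScript sketch, not anything from the actual project. All names are illustrative, and SharePoint itself gives you no real ACID guarantees underneath:

```typescript
// Illustrative sketch of a "(semi-)transactional" wrapper: apply writes one
// by one, remember how to undo each, and roll back manually on failure.
// (Hypothetical names; without real atomicity the rollback itself can also
// fail — which is exactly the brain damage in question.)

type Write = {
  apply: () => Promise<void>;
  undo: () => Promise<void>;
};

async function runSemiTransaction(writes: Write[]): Promise<void> {
  const done: Write[] = [];
  try {
    for (const w of writes) {
      await w.apply();
      done.push(w);
    }
  } catch (err) {
    // Best-effort rollback, newest first; swallow undo errors because
    // there is nothing sane left to do with them.
    for (const w of done.reverse()) {
      await w.undo().catch(() => undefined);
    }
    throw err;
  }
}
```

Best effort is the operative phrase: if an undo fails halfway through, you're left with exactly the inconsistent shared state the wrapper was meant to prevent.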
@IntrusionCM
It's technically transactional as long as you avoid the odata interface. As ironic as that is. -
@SortOfTested
ah. I barely remember it.
Looong looong time ago I looked at SharePoint for an import/export process.
And left the company before the planning phase ended.
Which was a good thing... XD ;) -
@IntrusionCM
True dat. Rest calls are all transactional, so there's not really any bonus points to them for not fucking that up 😆 -
joeyhops: @IntrusionCM 😂😂 You're not wrong at all, unfortunately. It's funny that you mention transactions, because it's absolutely been brought up more than once (both by my team members and the client) whether we can implement an ACTUAL transaction-esque system on top of our... System. Luckily we've been too swamped with other stuff to give it much attention...
It's a stupid (but stupidly lucrative) project ¯\_(ツ)_/¯ -
@SortOfTested
Well... Some APIs are amazingly talented at fucking up error handling.
So at least a bonus point if that works without having to double-check or do weird shit™.
I remember in this context (sorry for hijacking the rant) one internal API in that company that did the following (REST based):
- fetchResource returns 200 always
- response contains a UUID
- call fetchResourceStatus API with UUID to get the result set
- call fetchResourceError API with UUID to get the error
A very special person was very proud of his "absolute clear and intuitive design".
Go figure. -
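For flavor, here's roughly what consuming that design looks like from the client side — a hedged TypeScript sketch. The three endpoint names come from the list above; everything else (query parameters, response shapes) is assumed:

```typescript
// Sketch of a client for the "always 200" API described above. The HTTP
// layer is injected so the shape is testable; the three-endpoint dance is
// the point: one call to start, a second to learn the result, a third to
// learn the error. Response fields (ok, result, message) are guesses.

type Json = any;
type HttpGet = (url: string) => Promise<Json>;

async function getResource(httpGet: HttpGet, base: string, name: string): Promise<Json> {
  // Step 1: fetchResource — always HTTP 200, success or not
  const { uuid } = await httpGet(`${base}/fetchResource?name=${encodeURIComponent(name)}`);

  // Step 2: fetchResourceStatus — did it actually work?
  const status = await httpGet(`${base}/fetchResourceStatus?uuid=${uuid}`);
  if (status.ok) {
    return status.result;
  }

  // Step 3: fetchResourceError — only now do you learn what went wrong
  const err = await httpGet(`${base}/fetchResourceError?uuid=${uuid}`);
  throw new Error(`fetchResource ${uuid} failed: ${err.message}`);
}
```

Three round trips to learn one thing — "absolute clear and intuitive", indeed.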
joeyhops: @IntrusionCM Heh, sounds like somebody schooled this guy on the Single Responsibility Principle 🙄😂
First let me start this rant by saying: Don't use SharePoint lists as your primary data store if you can avoid it. You're gonna have a bad time.
My coworkers and I work on a system where we need to pull tons of data down from a SharePoint site and run various algorithms and operations on it. Generate reports, that sort of thing. This is all done in the browser using a TypeScript React SPFx webpart. Basically using SharePoint as a DB/DAL.
Because of the sheer amount of data we end up pulling down (our system in production is the single source of truth for one of the largest companies in Canada, and they're currently building a pipeline as we speak), we have some pretty intense caching logic implemented in order to maintain a reasonable speed: logic that fetches new items when they're detected, and merges changes into already existing objects. It's pretty brilliant, and that's before we even consider the custom paging my coworker implemented to get around the IndexedDB max size of 100MB.
Well, that's all well and good, and it works great in production, but it is a horror to work with. Because EVERYTHING we touch on the server is cached locally, it can be IMPOSSIBLE to detect data anomalies, be they local or server-side -.- You don't know how many hours I have completely WASTED fixing a "bug" that didn't really exist... just incorrect data in the cache.
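To make the shape of the problem concrete, here's a minimal sketch of the merge-by-id idea described above — not the actual implementation, and the item shape (Id, Modified) is just an assumption modeled on typical SharePoint list items:

```typescript
// Minimal sketch of "fetch new items, merge changes into existing objects".
// New Ids are added; an existing Id is replaced only when the server copy
// has a newer Modified timestamp. Field names are assumptions.

interface ListItem {
  Id: number;
  Modified: string; // ISO timestamp
  [field: string]: unknown;
}

function mergeIntoCache(cache: Map<number, ListItem>, fetched: ListItem[]): Map<number, ListItem> {
  for (const item of fetched) {
    const cached = cache.get(item.Id);
    if (!cached || Date.parse(item.Modified) > Date.parse(cached.Modified)) {
      cache.set(item.Id, item);
    }
  }
  return cache;
}
```

A comparison like that is exactly where the phantom "bugs" live: one stale timestamp or bad server value and the cache happily keeps the wrong object forever.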
Tags: rant, sharepoint, annoying, data, caching, time, browser, db, wasted