Extract from Doug Austin’s article “Traditional eDiscovery Workflows Can Be an Information Governance Nightmare”
As I mentioned last week, ILTACON 2021 returned this week as a hybrid conference after being conducted completely virtually last year, with the in-person component held in Las Vegas. I attended in person and covered the event as press, so I was able to sit in on several of the sessions, which are always high quality, and this year was no exception.
One of those sessions was titled “Discovery, Information Governance and Retention: How Long Should This Go On?”, and the panelists (Richard Brooman of Saul Ewing Arnstein & Lehr, Amanda Cook of Acorn Legal Solutions and Stephen Dempsey of The Chemours Company) did a great job discussing how data is handled during the discovery and review process, and what organizations need to retain from those processes over the matter lifecycle.
One consideration they raised is one that I don’t think gets discussed enough within our industry: how much data is generated during traditional eDiscovery workflows. That volume of data (and especially the redundant data from all of the copies of ESI created during discovery) can be an information governance nightmare!