Under Community Review

Usage and optimization report

A user asks me the question: how can I use the report mechanism to make my publication warning- and error-free?
I can certainly imagine the need, but I have no idea how to help the user.

As an example, a short excerpt from the 5100 notifications:

Screenshot of Tridion Docs Ideas report with multiple warning messages, each containing a GUID and error description, making it difficult for users to interpret.

How can a user still work with this without getting dizzy from the GUIDs?

The concrete idea:
Look at the report through the eyes of a user - and involve them - and make the report usable...

Parents
  • Thanks for posting this. As you can imagine, a more readable report is something we'd like to enable, though current priorities as you'll have seen in roadmap presentations are for some technical modernization that will indirectly speed up our ability to make such changes and many more. We're certainly keeping an eye also on votes for this idea and the similar ones posted previously.

    One practical point – I can imagine that a readable report could help with half the GUIDs listed in your screenshot, but not the other half. If there's a broken link, the publication itself has no idea what the link target was supposed to be, and if the original target was removed from the repository there'd be no way at all to recover it. 5100 lines does sound like a lot; if the majority are about broken links and the like, I wonder whether it's worth revisiting the authoring process itself. Affordances in the authoring bridge and in Collective Spaces are designed to minimize the chances of broken links in the first place, so I wonder whether there was a mass change to content that caused this?

    But anyway the general point about readability of reports is well taken.


Children
  • A good report is in my opinion essential to create good quality documentation.

    - The many false positives in relation to xrefs in conreffed content have been known for years.
    - Objects not present in the publication are present in the CMS repository 99.9% of the time. You could imagine that an API call could retrieve the title of such an object.
    - 5100 lines are indeed a lot; however, filtering on warning type isn't possible. Example warning groups could be: Linking, Image resolutions, Objects, External references, etc.
    - A link that can't be resolved: is that a warning or an error? From a documentation perspective it is an error! From a system and publication-technical perspective, it seems to be just a warning...
    - If a warning turns out not to be a real problem, the same message will still reappear on every new report; there is no way to dismiss these warnings.

    A lot of warnings are inherent to large publications. The number of warnings is not the problem, it's about working with the information from the report.
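    The grouping idea above can be sketched in a few lines. This is a minimal illustration, not the actual Tridion Docs report format: the sample report lines, group names, and patterns below are all invented for the example.

```python
import re
from collections import defaultdict

# Invented sample lines standing in for entries from a publish report.
REPORT_LINES = [
    "Warning: unable to resolve xref to GUID-0A1B-EXAMPLE",
    "Warning: image GUID-77C2-EXAMPLE resolution below 96 dpi",
    "Warning: unable to resolve xref to GUID-9F00-EXAMPLE",
]

# Patterns mapping message text to the warning groups suggested above.
GROUP_PATTERNS = {
    "Linking": re.compile(r"resolve xref"),
    "Image resolutions": re.compile(r"resolution below"),
}

def group_warnings(lines):
    """Bucket raw report lines into coarse, filterable warning groups."""
    groups = defaultdict(list)
    for line in lines:
        for group, pattern in GROUP_PATTERNS.items():
            if pattern.search(line):
                groups[group].append(line)
                break
        else:
            groups["Other"].append(line)  # anything unrecognized
    return dict(groups)
```

    A real implementation would map the reporting engine's actual message codes to groups, but even crude pattern matching like this would make 5100 lines navigable.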

  • Hi Roy, thanks for the reply. I'm sorry if my second paragraph seemed evasive. It offered a couple of thoughts based on my own experience of managing a large authoring team's processes in Tridion Docs: basically, we had a zero-tolerance approach to broken links and used the publish reports and baseline freezing as a final integrity check to catch anything that had slipped through the normal protections. (You cannot freeze a baseline while there are broken references, which suggests that in general this is treated as an error, though a few customers have good reasons for publishing while broken links remain.)

    Could you elaborate on "The many false positives in relation to xrefs in conreffed content has been known for years"? If you could describe what's happening, that could be a clue as to the high number of broken links you describe.

    Though again, the general point on readability is not missed -- again, votes and accounts from other customers help us to understand the common situations and assess possible enhancements, for example whether most customers have a particular role to help validate publications at the end of the cycle, or if it's generally expected that most technical authors should be able to do troubleshooting.

  • Hello Joe,

    I think it goes into too much detail to discuss all the scenarios here.
    Perhaps we can schedule a session for that sometime.

    But still an example that quite a few users suffer from:

    Diagram showing a section of topic A reusing content from topic B with an unresolved xref link.

    Topic A reuses, for example, a section of topic B through conref.
    Topic B contains an <xref> that links to another topic. Topic B itself is not part of the publication output, since only the section is used in A.
    In this example, the report indicates that the link cannot be resolved: a false positive.
    And we use reuse quite a lot!

    This way, the real error messages are indistinguishable from the false positives.
    It just takes a lot of time to figure out whether something is a false positive or not.
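    A hypothetical heuristic for pre-sorting this kind of false positive: if the unresolved xref was pulled in via conref from a topic that is not itself part of the publication, mark it as a likely false positive. The GUIDs and data structures below are invented for the sketch; a real check would query the CMS repository via its API.

```python
# Invented GUIDs: topic A is in the publication and conrefs a section
# of topic B; topic B itself is not referenced by the publication map.
PUBLICATION_TOPICS = {"GUID-TOPIC-A"}
CONREF_SOURCES = {"GUID-TOPIC-A": "GUID-TOPIC-B"}  # reusing topic -> source topic

def classify_unresolved_xref(topic_guid):
    """Classify an unresolved xref reported against topic_guid."""
    source = CONREF_SOURCES.get(topic_guid)
    if source is not None and source not in PUBLICATION_TOPICS:
        # The xref lives in reused content whose home topic is outside
        # the publication, so the link may well resolve in other contexts.
        return "likely false positive"
    return "real broken link"
```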

    The scenario as it appears in Publication Manager:

    Screenshot of Tridion Docs interface with an error message about an unresolved xref link in a reused section.
    You can imagine it gets even worse when B contains reuse as well!