Release Notes 5.0.699.13580

When customers can expect this release:

Group 1: June 16, 2020

Group 2: June 17, 2020

Group 3: June 25, 2020

Bug Fixes:

PNE-949

Opportunity Pareto - Default Start and End date are not displayed in the header and in Settings

Description

Steps to reproduce:

  1. Go to

https://qatesting.shoplogix.com/whiteboard/#/

  2. Select any machine within Plant that has Opportunity Pareto enabled (ex: Richard Kendricks) > Pareto

Expected behavior: The report is displayed with the following:
End = End Time of the previous shift
Start = End - 24 hours
Level 1 = Reason

Actual behavior: Default Start and End time are missing from header and in Settings (Start and End fields)
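
For reference, a minimal sketch of the expected default calculation (hypothetical helper; the actual report derives the previous shift end from the machine's shift schedule):

```
using System;

static class OpportunityParetoDefaults
{
    // Expected defaults: End = end time of the previous shift, Start = End - 24 hours.
    // "previousShiftEnd" is assumed to come from the machine's shift schedule.
    public static (DateTime Start, DateTime End) Get(DateTime previousShiftEnd)
    {
        DateTime end = previousShiftEnd;
        return (end.AddHours(-24), end);
    }
}
```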

FIXED

PNE-945

Changing Shift Hours to Line Units Stops View from Loading

Description

When you change the units in the Shift Hours view for this specific machine on the mdlz-baddiqa server, it won't return any data. Since this is an EOL machine, the missing counts are reflected in other line views as well. I've double-checked that the line cycle factor is not set to 1 and tried various values, but the issue persists.

Base Units:
https://mdlz-baddiqa.shoplogix.com/whiteboard/#/shifthours/9267DF23-E7C4-CF56-8954-81E4DA3BB3B4/start/20200525T060000.000/gauge1=1&gauge2=3&gauge3=5&timeMetric=0&showoccurrs=true&cycleDP=1&cycleTOL=0&gaugeTOL=0&goalgreen=1&goalyellow=0.75&goalred=0.5&scrapgreen=0.02&scrapyellow=0.05&scrapred=0.1&noperiod=true

 

Line Units:
https://mdlz-baddiqa.shoplogix.com/whiteboard/#/shifthours/9267DF23-E7C4-CF56-8954-81E4DA3BB3B4/start/20200525T060000.000/gauge1=1&gauge2=3&gauge3=5&timeMetric=0&showoccurrs=true&cycleDP=1&cycleTOL=0&gaugeTOL=0&goalgreen=1&goalyellow=0.75&goalred=0.5&scrapgreen=0.02&scrapyellow=0.05&scrapred=0.1&noperiod=true&units=3

 

Let me know if you need more info or are having trouble reproducing.

Environment

None

Attachments

  • image-2020-05-28-13-52-37-265.png

FIXED

PNE-928

Waterfall with database produces timeout

Description

Steps to reproduce:

  1. Click this link:

https://mdlz-baddiqa.shoplogix.com/whiteboard/#/oeewaterfall/area/4/start/20200401T140000.000/end/20200520T140000.000/yesoee=true&oeeType=0

  2. Press F12 and refresh.

  3. Look at the 500 error:

{"statusCode":"InternalServerError","message":"Something went horribly, horribly wrong while servicing your request.","details":"Nancy.RequestExecutionException: Oh noes! ---> Npgsql.NpgsqlException: Exception while reading from stream ---> System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond\r\n at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)\r\n — End of inner exception stack trace ---\r\n at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)\r\n at Npgsql.NpgsqlReadBuffer.<>c_DisplayClass31_0.<<Ensure>gEnsureLong|0>d.MoveNext()\r\n --- End of inner exception stack trace ---\r\n at Npgsql.NpgsqlReadBuffer.<>cDisplayClass31_0.<<Ensure>gEnsureLong|0>d.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at Npgsql.NpgsqlConnector.<>cDisplayClass161_0.<<ReadMessage>gReadMessageLong|0>d.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at Npgsql.NpgsqlDataReader.<NextResult>d46.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Npgsql.NpgsqlDataReader.NextResult()\r\n at Npgsql.NpgsqlCommand.<ExecuteDbDataReader>d100.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Npgsql.NpgsqlCommand.ExecuteDbDataReader(CommandBehavior behavior)\r\n at Npgsql.NpgsqlCommand.ExecuteReader()\r\n at Shoplogix.Server.SqlCoreDataRepository.ProcessOeeWaterfall(IEnumerable`1 machines, IEnumerable`1 legends, Instant startInstant, Instant endInstant, Boolean isLine) in C:\\Git\\pne\\visualstudionet\\Server\\Shoplogix.Server\\Reports\\CoreData
SqlCoreDataRepository.cs:line 846\r\n at Shoplogix.Server.BuildOeeWaterfallResponse.ProcessAggregateDataDatabase(IEnumerable`1 machines, Instant startInstant, Instant endInstant, Boolean isLine) in C:\\Git\\pne\\visualstudionet\\Server\\Shoplogix.Server\\Reports\\OeeWaterfall
BuildOeeWaterfallResponse.cs:line 273\r\n at Shoplogix.Server.BuildOeeWaterfallResponse.BuildFinalResult(String name, IEnumerable`1 machines, DateTime start, DateTime end, WaterfallRequestType requestType, Boolean isLine, Boolean areaLine) in C:\\Git\\pne\\visualstudionet\\Server\\Shoplogix.Server\\Reports\\OeeWaterfall
BuildOeeWaterfallResponse.cs:line 92\r\n at Shoplogix.Server.ApiOeeWaterfall.<.ctor>b4_0(Object parameters) in C:\\Git\\pne\\visualstudionet\\Server\\Shoplogix.Server\\Reports\\OeeWaterfall
ApiOeeWaterfall.cs:line 100\r\n at Nancy.Routing.Route.<>cDisplayClass15_0.<Wrap>b_0(Object parameters, CancellationToken context)\r\n — End of inner exception stack trace ---\r\n at Nancy.NancyEngine.InvokeOnErrorHook(NancyContext context, ErrorPipeline pipeline, Exception ex)"}

 

QA13 replication: https://qa13.shoplogix.com/whiteboard/#/oeewaterfall/area/2/start/20200101T140000.000/end/20200520T140000.000/yesoee=true&oeeType=0

PNE-782

Line Summary - If there is no job running on End of Line machine, production is not displayed

Description

If there is no job running on the End of Line machine, production is not displayed, even though there was Good Production on the End of Line machine.

Line Summary example:
https://qatesting.shoplogix.com/whiteboard/#/linesummary/area/7/start/2020-03-13T07:00:00.000/end/2020-03-13T15:00:00.000/snaptoshift=false&goalgreen=1&goalyellow=0.75&goalred=0.5&scrapgreen=0.02&scrapyellow=0.05&scrapred=0.1

Note: No good production displayed for Carla and Dinesh

Carla shifthours:
https://qatesting.shoplogix.com/whiteboard/#/shifthours/76FF2128-2DF6-382A-0ED7-AD1CAFC21B52/start/20200312T141246.000/gauge1=1&gauge2=8&gauge3=0&timeMetric=0&cycleDP=1&cycleTOL=0&gaugeTOL=0&overCycling=0&goalgreen=1&goalyellow=0.75&goalred=0.5&scrapgreen=0.02&scrapyellow=0.05&scrapred=0.1&units=3

Dinesh shifthours:
https://qatesting.shoplogix.com/whiteboard/#/shifthours/9AA9FA61-0908-66D7-EB56-AD1B43DD68B6/start/20200313T133728.000/gauge1=1&gauge2=8&gauge3=0&timeMetric=0&cycleDP=1&cycleTOL=0&gaugeTOL=0&overCycling=0&goalgreen=1&goalyellow=0.75&goalred=0.5&scrapgreen=0.02&scrapyellow=0.05&scrapred=0.1&units=3

Good Production is displayed for Richard Kendricks, which has a running job in the specified period.

PNE-697

Setup is missing for Plant Meeting and Line Summary pareto

Description

Example:

Plant Meeting - Line 15
https://qatesting.shoplogix.com/whiteboard/#/plantlevellinemeeting/area/5/start/2020-02-19T07:00:00.000/end/2020-02-19T15:00:00.000/snaptoshift=false&goalgreen=1&goalyellow=0.75&goalred=0.5&scrapgreen=0.02&scrapyellow=0.05&scrapred=0.1&cycleTOL=0&gaugeTOL=0&cycleDP=1

Line Summary
https://qatesting.shoplogix.com/whiteboard/#/linesummary/area/7/start/2020-02-19T07:00:00.000/end/2020-02-19T15:00:00.000/snaptoshift=false&goalgreen=1&goalyellow=0.75&goalred=0.5&scrapgreen=0.15&scrapyellow=0.19&scrapred=0.2&layoutoption=1

It should show Setup since Richard Kendricks (End Of Line Machine) was in Setup for 2 hours.

PNE-614

Have to pick sub-job for existing scrap

Description

Prerequisite: Running 1 job with 2 sub-jobs

  1. Add scrap for a specific Job and Sub-Job.

  2. Select the scrap entry from step 1.

Expected result: Scrap Edit screen is displayed where user can change scrap amount for the job and sub-job

Actual result: List of jobs is displayed

CS-2528

Unable to access query on saas20 server.

Description

When I run the query in order to do the weekly Magna reports, this error occurs.
I have already tried clearing all the cache, and the error occurred a second time.

2020-06-08 10:43:43.0940 02208:043 INFO Shoplogix.Server.Services.ConnectorProcessService CN_mptRamosArizpe 6060a1e0-2f9a-4cf9-abbc-724c25217115 6/8/2020 10:43:27 AM Start
2020-06-08 10:43:51.7967 02208:013 ERROR Error - https://saas20.shoplogix.com:443/web/api/export/summary?Start=20200531&End=20200607&metrics=OEE,availability,Performance,quality&groupBy=plant&format=xml - An item with the same key has already been added.
EXCEPTION System.ArgumentException: An item with the same key has already been added.
at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource)
at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
at Shoplogix.Server.Metrics.UpdateJobMetrics(CoreDataRecord c, LegendRecord legend, RequestParameters exportParams, Query query, JobRecord intersectedJob) in D:\BuildAgent\work\3378db5a28cef30e\visualstudionet\Server\Shoplogix.Server\Reports\Export\Metrics.cs:line 261
at Shoplogix.Server.Metrics.Update(CoreDataRecord c, LegendRecord legend, RequestParameters exportParams, LegendMachineState state, Query query, JobRecord intersectedJob) in D:\BuildAgent\work\3378db5a28cef30e\visualstudionet\Server\Shoplogix.Server\Reports\Export\Metrics.cs:line 142
at Shoplogix.Server.BuildResponse.ProcessData(IEnumerable`1 machines, RequestParameters exportParams) in D:\BuildAgent\work\3378db5a28cef30e\visualstudionet\Server\Shoplogix.Server\Reports\Export\BuildResponse.cs:line 686
at Shoplogix.Server.BuildResponse.GroupByPlant(IEnumerable`1 machines, RequestParameters exportParams, IEnumerable`1 plants) in D:\BuildAgent\work\3378db5a28cef30e\visualstudionet\Server\Shoplogix.Server\Reports\Export\BuildResponse.cs:line 96
at Shoplogix.Server.ApiDataExport.RoutingControl(ApiExportGroupBy group, RequestParameters exportParams, Nullable`1 areaGroupRoot) in D:\BuildAgent\work\3378db5a28cef30e\visualstudionet\Server\Shoplogix.Server\Reports\Export\ApiDataExport.cs:line 561
at Shoplogix.Server.ApiDataExport.<.ctor>b__7_7(Object parameters) in D:\BuildAgent\work\3378db5a28cef30e\visualstudionet\Server\Shoplogix.Server\Reports\Export\ApiDataExport.cs:line 350
at Nancy.Routing.Route.<>c_DisplayClass15_0.<Wrap>b_0(Object parameters, CancellationToken context)
2020-06-08 10:43:55.9101 02208:020 INFO Shoplogix.Server.Services.BatchRunnerService Batch End 12.8168299s, 3.20420845s/connector, 0.23734877962963s/machine

Sample URL

https://saas20.shoplogix.com/web/api/export/summary?Start=20200531&End=20200607&metrics=OEE,availability,Performance,quality&groupBy=plant&format=xml
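
For context, the failure above is a Dictionary.Add collision inside Metrics.UpdateJobMetrics. A minimal sketch of the defensive pattern, with hypothetical names (not the actual Metrics code):

```
using System.Collections.Generic;

// Hypothetical per-job accumulator; the real types live in
// Shoplogix.Server.Metrics.UpdateJobMetrics (Metrics.cs:261).
class JobMetrics
{
    public double Total;
}

static class JobMetricsAggregator
{
    public static void AddOrUpdate(Dictionary<string, JobMetrics> byJob, string jobKey, double total)
    {
        // Dictionary.Add throws ArgumentException ("An item with the same key has
        // already been added.") when the key already exists; check first and merge instead.
        if (byJob.TryGetValue(jobKey, out JobMetrics existing))
        {
            existing.Total += total;
        }
        else
        {
            byJob[jobKey] = new JobMetrics { Total = total };
        }
    }
}
```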

CS-2510

Curitiba (saas64) has Scorecard errors in web logs again.

Description

saas64 has a large number of errors related to Scorecard in the web logs; if you look back earlier, there was also a huge number of connector log files being created as well. They've been down for approximately two days while Mondelez has been doing internal troubleshooting. If the large number of connector logs is an indication that this issue will eventually occur, then saas122 and saas82 are both going down the same path. Proposed steps for resolution are as follows:

Step 1: Check all servers that received patch 5.0.613.13126 for similar errors - DONE: saas122 and saas82 are the only others that indicated they may have a similar issue

Step 2: Fix issue at saas64 - NOT DONE

Step 3: If necessary, apply the same patch to saas122 and saas82 - NOT DONE

Step 4: Check other mdlz servers running main branch version 12521 to see if the issue exists there, and patch any servers that have the issue. - NOT DONE

Let me know if you need additional information. I'll follow the escalation procedure.

Sample URL

https://saas64.shoplogix.com/logs/ https://saas64.shoplogix.com/logs/2020-06-05%20Web.txt

CS-2497

Mondelez West Suzhou CPU Utilization and Log Errors

Description

The West Suzhou plant from Mondelez has been complaining about poor performance. Upon looking into server metrics we noticed that they regularly hit 100% CPU usage and there were several fatal errors in the web logs. See below for more detail:

CPU Utilization:

 

Logs:

 

Let me know if any additional info is needed, thanks!

Sample URL

https://saas128.shoplogix.com:4443/logs/2020-06-04%20Web.txt

CS-2464

API Call Failure - Cannot pull "Capacity" after a recent update in the last month

Description

I run a monthly IAC query based on

https://saas24.shoplogix.com/web/api/export/summary?Start=20200523T000000&End=20200523T235959&metrics=uptimeHours,downtimeHours,total,scrap,OEE,availability,performance,quality,capacity,OEEc,&groupBy=plant,machine,&format=json

I am no longer able to pull 'capacity' on any server that was patched recently.

This is the EXPORT API and I am extremely concerned that a query worked 1 month ago and does not work now.

I would like to know what testing is conducted on API queries to ensure reliability, because we are lucky capacity isn't a commonly used variable, but if we had lost OEE or Total we would have a lot of irate customers.

This query works on an MDLZ server using a very old version, and does not work on any newly updated servers.

Sample URL

Doesn't work on SAAS24 - IAC Version: 5.0.0.13148
https://saas24.shoplogix.com/web/api/export/summary?Start=20200523T000000&End=20200523T235959&metrics=uptimeHours,downtimeHours,total,scrap,OEE,availability,performance,quality,capacity,OEEc,&groupBy=plant,machine,&format=json

Doesn't work on Demo 5.0.0.13148
https://demo.shoplogix.com/web/api/export/summary?Start=20200523T000000&End=20200523T235959&metrics=uptimeHours,downtimeHours,total,scrap,OEE,availability,performance,quality,capacity,OEEc,&groupBy=plant,machine,&format=json

Does work on an MDLZ version 5.0.20070.0231
https://saas75.shoplogix.com/web/api/export/summary?Start=20200523T000000&End=20200523T235959&metrics=uptimeHours,downtimeHours,total,scrap,OEE,availability,performance,quality,capacity,OEEc,&groupBy=plant,machine,&format=json

CS-2447

Opp Pareto showing Reasons as “S”

Description

Opportunity Pareto is displaying reasons only as "S" and not showing full names for the reason.

Sample URL

https://saas127.shoplogix.com/whiteboard/#/opportunitypareto/areas/336/start/20200201T000000.000/end/20200301T000000.000/level1/reasonGroup1/level2/reason/maxsub=3&level1=reasonGroup2&level2=reasonGroup1&level3=reason

CS-2094

New Job Defaults CF to 0

Description

When entering a job, the dialog puts in a CF of 0. I see customers leaving it at 0, which causes confusion later. It should be 1 by default.

Sample URL

https://saas127.shoplogix.com/whiteboard/#/shifthours/B96546EB-A8AC-1485-8DEF-58C9A0E527A9 (enter a new job)

https://my.shoplogix.com/admin/support_cases/view/5d67ee88-2758-4c19-88e1-11470a0000d9?tab=2

CS-1738

Fix Sort Order of Reasons In Shift-Chrono view, Search/filter/Drop down for downtime

Description

HQ-1551769594

Enhancement Request: In the Shift-Chrono view, while applying downtime to multiple pieces of equipment, a search/filter/drop-down for the downtime lists is needed instead of having all reasons displayed in a lengthy list.

Downtimes need to be shown in their Groups → this will be done as an enhancement in https://slxdev.atlassian.net/browse/CS-2210

List sort order needs to be fixed.

See: https://saas95.shoplogix.com/whiteboard/#/shiftchrono/areas/2

New Features / Status on Long-term Work:

Paging (backend) - Update the POST/PUT requests for new pages to only pass the machine ID, and update the GET request to return the plant and area where the machine belongs

Description

User Story:

As an operator/supervisor, I only want to see the pages that are related to the Plant and Machine that they originated in, so that I can address problems relevant to me.

Background:

During testing of Paging, @Aleksandar Dimitrijevic (Unlicensed) noticed that pages are being displayed across different plants. For example, a page that was created under Plant X is also displaying in the Paging Portal of Plant Y.

Requirements:

REQ1. When a user creates a new page, ensure that the SLX backend stores the page’s correct Plant ID and Area ID, so that any future GET requests will send the front-end only the relevant pages for the end-user.

REQ2. When a user requests data at machine-level (for example, loading the Machine Hourly view), the SLX backend should send them only pages that originated from that Machine

(NOT NEEDED) REQ 3. When a user requests data at area-level (for example, loading Area Analysis view), the SLX backend should send them only pages that originated from that Area

Example:

  • If the end-user is part of Plant “Chicago”, Area: “Welding”, and an operator working on a machine within this area creates a page, this page should be saved under this correct hierarchy.

  • This way, end-users of the same company, but different plants (e.g. Plant “Washington”) cannot see pages that are not relevant to their day-to-day operations.

  • It also ensures that pages that occur in unrelated machine Areas are not displayed on the front-end in the future (when displaying pages within the frontend)

  • If a page was created under a Machine in Area 2, but the user is in Area 1, then this page should not appear within the interface

  • If a page was created on Machine A, then its page should only appear within Machine A screens (OR the Area that Machine A rolls up to).

UX Requirements/Prototypes (if needed):

N/A

Risks/Assumptions:

  • Assumption 1: When in a machine-level screen, end-user only cares about seeing pages originating from that machine

  • (NOT NEEDED) Assumption 2: When in an area-level screen, end-users only care about seeing pages originating from a machine within that area

Business Owner

Steven / Manny

Acceptance Criteria

  • Can use POSTMAN or something similar to ensure we are capturing and storing the correct Paging record information

  • To ensure this task is working correctly on the front-end, would likely need to also complete the above-linked related ticket (GET request for the user to load and receive the right paging records)

  • User should only see relevant pages within Machine-level views

  • (NOT NEEDED) User should only see relevant pages within Area-level views

  • User should only see relevant pages within the Paging Portal (e.g. Paging Portal of Plant “Chicago” should never show pages related to Plant “Washington”)
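
A minimal sketch of the kind of filtering REQ1/REQ2 call for, with hypothetical types and field names (the real paging records live in the SLX backend):

```
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical page record; per REQ1 the backend stores the machine ID together with
// the plant and area that machine belongs to.
class PageRecord
{
    public Guid MachineId;
    public Guid AreaId;
    public Guid PlantId;
    public DateTime CreatedUtc;
}

static class PagingQueries
{
    // REQ2: a machine-level view (e.g. Machine Hourly) should only receive pages
    // that originated from that machine.
    public static IEnumerable<PageRecord> ForMachine(IEnumerable<PageRecord> pages, Guid machineId) =>
        pages.Where(p => p.MachineId == machineId);

    // Paging Portal: only pages from the requesting user's plant, so Plant "Chicago"
    // never sees pages from Plant "Washington".
    public static IEnumerable<PageRecord> ForPlant(IEnumerable<PageRecord> pages, Guid plantId) =>
        pages.Where(p => p.PlantId == plantId);
}
```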

PNE-940

Separate build of a copied version of current whiteboard

Description

Goal:

  • This is the initial step of starting a codemod.

  • In total, it will eventually need all of copy, translate, and build.

  • Within this task, copy and build will be completed.

PNE-937

Waterfall - refresh page only when it will be useful

Description

Problem: Waterfall refreshes every 30 seconds, even when it isn't useful to do so. This brings the database to a halt when the query is for a large date range.

Possible Solution: stop refreshing when the date range is > 1 month, or when the current time isn't included (see the sketch at the end of this ticket).

Acceptance Criteria:

  • shouldn't see an XHR request to the API when:
    a) the date range does not go beyond the current time (can't change).
    b) the date range is more than 30 days in duration (changes will be irrelevant).

  • the refresh should come back if either of those conditions are cleared.

  • (side note: I've noticed waterfall loads last results: it should refresh on first load... probably a false concern but I thought I'd add it just to be sure).

Further considerations (REFINE):

  • this applies to Plant Level Line Summary (PLLS) and potentially all views. PLLS and waterfall are all that are important for now though.
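
A minimal sketch of the refresh guard described in the acceptance criteria, written in C# only to pin down the two conditions; in practice the check would live in the whiteboard frontend before issuing the refresh XHR, and the names here are assumptions:

```
using System;

static class WaterfallRefreshGuard
{
    // Auto-refresh is only useful when the requested range can still change:
    // (a) the range must extend past "now" and (b) span no more than 30 days.
    public static bool ShouldAutoRefresh(DateTime start, DateTime end, DateTime now)
    {
        bool rangeCanStillChange = end > now;                     // criterion (a)
        bool rangeIsSmallEnough = (end - start).TotalDays <= 30;  // criterion (b)
        return rangeCanStillChange && rangeIsSmallEnough;
    }
}
```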

PNE-905

Make QualityChecksService Frequency Configurable

Description

Summary

Need to add the ability to configure the QualityChecksService interval. This should be done similarly to ScorecardProcessingService, where the interval setting is set in the app config file (e.g. `web.config`).

This is to allow some control over the frequency such that a frequency less than 1 minute can be set for demo servers.
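
A minimal sketch of the intended configuration read, modelled loosely on the ScorecardProcessingService approach mentioned above; the appSettings key name and default are assumptions:

```
using System;
using System.Configuration;   // System.Configuration.dll

static class QualityChecksConfig
{
    // Hypothetical appSettings key, e.g. in web.config:
    //   <appSettings>
    //     <add key="QualityChecksService.IntervalSeconds" value="30" />
    //   </appSettings>
    private const string IntervalKey = "QualityChecksService.IntervalSeconds";

    // Read the polling interval, falling back to 60 seconds when the key is
    // missing or invalid. Allowing seconds (not just minutes) is what lets
    // demo servers run the service more often than once per minute.
    public static TimeSpan GetInterval()
    {
        string raw = ConfigurationManager.AppSettings[IntervalKey];
        return int.TryParse(raw, out int seconds) && seconds > 0
            ? TimeSpan.FromSeconds(seconds)
            : TimeSpan.FromSeconds(60);
    }
}
```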

Details

PNE-880

Display Parts Remaining alert type

Description

Make Parts Remaining alerts displayable in new config

PNE-879

Display Time remaining alert type in new Config

Description

Display Time remaining alert type in new configuration

PNE-876

Display Micro Stoppage Alert type in new GUI

Description

Make Micro Stoppage alert type show up correctly in the new config

PNE-875

Add Slow Running Alert type

Description

Make Slow Running alert configurable

 

To make this testable, we need to populate the shift dropdown with actual shifts from that machine.

I am adding the shift API processing and propagating this into the settings for escalation levels.

PNE-874

Display Setup type Alert

Description

Make Setup type Alert configuration available.

The options should be duplicated from the Flash configuration.

PNE-872

Add AWS EC2 Instance Private IP to PNE Web Page

Description

Add the internal IP address to the CacheView.ashx page, e.g. https://qa.shoplogix.com/Web/CacheView.ashx

Mock-Up (no extra line break needed, as displayed here)

Only show if user IsAdministrator https://gitlab.com/shoplogix/pne/-/blob/develop/visualstudionet/Server/Shoplogix.Server/Services/IUserProvider.cs#L16
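
A minimal sketch of one way to obtain the private IP, using the standard EC2 instance-metadata endpoint; the IUserProvider gating shown in the comment is an assumption based on the link above:

```
using System.Net.Http;
using System.Threading.Tasks;

static class Ec2PrivateIp
{
    // Standard EC2 instance-metadata endpoint for the primary private IPv4 address.
    private const string MetadataUrl = "http://169.254.169.254/latest/meta-data/local-ipv4";

    public static async Task<string> GetAsync(HttpClient http)
    {
        string ip = await http.GetStringAsync(MetadataUrl);
        return ip.Trim();
    }
}

// Hypothetical usage inside the CacheView.ashx handler (admin check per the
// IUserProvider link above):
//   if (userProvider.IsAdministrator)
//       output.AppendLine("Private IP: " + await Ec2PrivateIp.GetAsync(httpClient));
```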

PNE-770

Paging - Time since calculation is wrong

Description

Follow up from https://slxdev.atlassian.net/browse/PNE-217

https://gitlab.com/shoplogix/pne/-/issues/1169

Updates

After discussions with @Ryan Gallagher (Unlicensed) and @Lokesh Podipireddy (Unlicensed), we see that there’s going to be some backend work needed, where the backend will send the appropriate timezone for the post, and then the frontend will display it appropriately.
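
A minimal sketch of the backend side of that direction, assuming NodaTime (which the server already uses for Instant); the zone ID and names are assumptions:

```
using NodaTime;

static class PageTimeFormatting
{
    // Backend side: resolve the page's Instant into the plant's IANA time zone so the
    // frontend can show "time since" in plant-local time rather than browser-local time.
    // The zone ID (e.g. "America/Toronto") would come from the plant's configuration.
    public static ZonedDateTime ToPlantLocalTime(Instant pageCreated, string plantZoneId)
    {
        DateTimeZone zone = DateTimeZoneProviders.Tzdb[plantZoneId];
        return pageCreated.InZone(zone);
    }
}
```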

PNE-760

Pareto Comment Performance Improvement

Description

Summary


Currently Pareto has been limited to:

  • Only showing comments either for a machine or when Machine is picked as a preceding level (e.g. comments will never be grouped for an entire area)

  • Max 50 machines

  • Max 3 month time period

    These limitations were put in place mainly due to performance problems surrounding comments. We would like to see the database do more grouping of comments and finalize the output to match what's needed for Pareto.

Server


saas16.shoplogix.com -> Kamtek first saw this problem
saas16clone.shoplogix.com -> a clone of saas16 taken around November 2019; data from this time frame can be used for testing (e.g. from July 2019 to November 2019 provides many comment records)

Investigation Steps


Use the Opportunity Pareto report as a guide to see how comments may be grouped, then look at making changes to the way the database is queried with respect to comments so that the results produced require little to no processing before being output.

For an example see: https://saas16.shoplogix.com/Whiteboard/#/opportunitypareto/areas/71/start/20200101T000000.000/end/20200102T000000.000/level1=reason&maxsub=3&level2=machine&level3=comment

Proposed Changes

Lookup and grouping operations will be done in the database.

The database will return all the information needed to create the final ParetoDataResponse object.

1. Create a new repository, e.g. “SqlOpportunityParetoRepository”, for all DB calls related to Pareto.

2. Create a GetParetoDataResponse function to create the appropriate T-SQL depending on the parameters.

3. Create a mapper function to translate the table-format database response into the ParetoDataResponse multi-level structure.

4. Remove unused code and wire up the new one.

Revised Change

Current DB changes do not enhance performance and should be disabled until further investigation and development can be done to improve performance by leveraging data in the database.

The UseDatabaseCache flag will be hard-coded to false to always bypass the database version:

UseDatabaseCache = false;

PNE-747

New Config - Add wrappers for popups to capture outside clicks and close the popups on those events

Description

For PEConfig, the current Accordion and Popup components don't close when a user clicks outside of the box. Other click events are registered, but they don't close the popup. Example: clicking the Save button will save the component, and clicking the Cancel button will reload the page.

Requirement: Create a wrapper around these popup components to capture clicks outside the box. This will allow us to close the box and/or validate. We still want the user to be able to Save or Cancel.

Where will these changes be applied? - DataGrid on the Machines Status tab, VariablesTable on Variables tab, and the Alert table in the Alerts tab

Testing Scenarios:

User opens the Machines Status tab, goes into Edit mode, and tries to edit one of the machine states.

  • Clicking outside the popup will close it. User stays in Edit mode.

  • Alternatively clicking the Save button will close the popup and save.

  • Leaving the page by clicking on the Tree: if the user clicks Save, the popup will close and the page gets saved. If they click “Don’t Save”, the popup will close and the page doesn’t save.

  • Leaving the page by clicking on the Machine name: if the user clicks Save, the popup will close and the page gets saved. If they click “Don’t Save”, the popup will close and the page doesn’t save.

The same functionality should work for all 3 popups.

Technical Details: Wrap popup content in a react-bootstrap Modal. Currently using react-bootstrap 3. https://react-bootstrap-v3.netlify.app/components/modal/

This has built-in functionality for handling outside clicks + background blur.