
I have a file.csv that looks like this:

4,6,18,23,26
5,12,19,29,31
2,5,13,16,30
9,10,24,27,32
4,5,10,19,22
4,6,8,10,25
2,3,4,25,11


I want to find certain patterns, save them to another log file (file.log), and remove the matching rows from the first file. A Perl or grep solution would be ideal.

  • For instance, if x+1 = x2 within a range of 3, remove the row, and log its existence and location in another file. So 2,3,4,25,11 would be removed from file.csv, and in file.log I would find something like "row 7: 2,3,4,25,11 was removed from file.csv". I'm trying to find sequences.



Answers

If we interpret your requirement to mean that the value of the third field (column) should be one more than that of the second, then with awk you can do something like:

awk -F, '
  $3 == $2 + 1 { print "row " NR ": " $0 " was removed from " FILENAME > "file.log"; next }
  { print }
' file.csv > newfile.csv


This will create file.log as specified and write the remaining lines to newfile.csv; you can then rename newfile.csv to file.csv to complete the removal.
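
Since you mentioned Perl, here is a rough equivalent as a minimal sketch, under the same interpretation (third field equal to the second field plus one) and with the same file names; adjust to taste:

perl -F, -lane '
    # open the log file once, before any input is read
    BEGIN { open $log, ">", "file.log" or die "cannot open file.log: $!" }
    if ($F[2] == $F[1] + 1) {
        # third field is one more than the second: log the row and drop it
        print $log "row $.: $_ was removed from file.csv";
    } else {
        print;    # keep all other rows
    }
' file.csv > newfile.csv

As with the awk version, rename newfile.csv to file.csv afterwards if you want the change applied in place.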

