Arvi Laanemets
Hi
I use a program named Angry IP Scanner to scan computers in our LAN.
The scanning results are saved as text files, where the results table is a
fixed-width text table (every column 22 characters wide) with some
unstructured info at the top (program name, scan date and time, empty rows)
and at the bottom (who scanned, empty rows again).
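For illustration, slicing such a fixed-width file is simple in any language. Here is a minimal Python sketch, assuming three 22-character columns (IP, Ping, Hostname; the actual column names and count may differ):

```python
# Minimal sketch of parsing one data row of a fixed-width scan report.
# Assumption: 22-character columns, here three of them (IP, Ping, Hostname).
COL_WIDTH = 22
NUM_COLS = 3

def parse_row(line):
    """Slice one fixed-width data row into stripped column values."""
    return [line[i * COL_WIDTH:(i + 1) * COL_WIDTH].strip()
            for i in range(NUM_COLS)]

# Example data row, padded to 22 characters per column:
row = "192.168.1.10".ljust(22) + "12 ms".ljust(22) + "pc-lab-01".ljust(22)
print(parse_row(row))  # → ['192.168.1.10', '12 ms', 'pc-lab-01']
```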
Until now I have used an Excel application to read the data in from the
selected text file, to modify it (replace all "N/A" and "N/S" entries with
empty strings, replace "Dead" in the Ping column with the numeric value 9999,
replace all other entries in the Ping column with the ping value in
milliseconds, etc.), and to add it to an Excel table along with the scan date
and time read from the text file header. This works fine and is fast enough,
but getting various reports from this table is a different matter. As the
table grows, the reports take more and more time to recalculate. So I started
to think about switching to Access.
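The cleanup rules described above (empty out "N/A"/"N/S", turn "Dead" into 9999, keep numeric ping values) could be sketched like this in Python. This is a hedged illustration only; in Access the same logic would normally live in a VBA function or in the append query's expressions, and the "12 ms" format of ping entries is an assumption:

```python
def clean_ping(value):
    """Cleanup rules from the post: 'N/A'/'N/S' -> empty string,
    'Dead' -> 9999, otherwise keep the numeric ping value (ms)."""
    value = value.strip()
    if value in ("N/A", "N/S"):
        return ""
    if value == "Dead":
        return 9999
    # Assumption: remaining entries look like '12 ms'.
    return int(value.split()[0])

print(clean_ping("Dead"))   # → 9999
print(clean_ping("12 ms"))  # → 12
print(repr(clean_ping("N/A")))  # → ''
```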
Now maybe someone here has good advice on the best way to read data from
text files into Access. I can see two possible solutions:
1. Data are read into a predefined empty interim Access table (text columns
only). The extra rows at the top and bottom of the table are deleted (I think
the number of extra rows is always the same). An append query then reads the
data from the interim table, adds date and time columns, makes the necessary
modifications, and appends the rows to the final table; after that, all rows
in the interim table are deleted.
2. Data are read row by row from the text file directly, modified, and stored
in memory variables. Then a row is added to the final table with the memory
variables as values. The cycle is repeated until all data rows are inserted.
(This solution is quite similar to the one used in my Excel application.)
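The row-wise cycle of option 2 can be sketched end to end. The sketch below uses Python with an in-memory SQLite table as a stand-in for the final Access table (in Access itself this would be a VBA loop over DAO/ADO recordsets); the column layout and sample values are assumptions:

```python
import sqlite3

# In-memory SQLite table stands in for the final Access table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scans (ip TEXT, ping INTEGER, scanned_at TEXT)")

scan_time = "2005-03-01 10:15"  # would be read from the file header
data_rows = [("192.168.1.10", 12), ("192.168.1.11", 9999)]  # already cleaned

# Row-wise cycle: hold each row's values in memory variables, then insert.
for ip, ping in data_rows:
    conn.execute("INSERT INTO scans VALUES (?, ?, ?)", (ip, ping, scan_time))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM scans").fetchone()[0])  # → 2
```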
Thanks in advance for any advice.